I've been thinking a lot about the topics broached in the last post, and how we learn the darnedest things without realizing it. Get into the habit, as it were. The darnedest of these habits is the one where you are expected to give an overly full account of yourself - covering every aspect that may or may not be relevant to what's going on.
Academia in particular fosters this kind of habit, in a very explicit fashion. When writing anything larger than a paper, you have to describe your methodology down to the nuts and bolts, and then evaluate the merits of the chosen brands of bolts against other brands. It is all very down to the wire and very literal - no obfuscation or ambiguity is allowed. Everything has to be explained, and it has to be explained well.
Part of this comes down to the scientific method, and the importance placed on being able to replicate the results of previous efforts. Like recipes, they become easier to follow when the steps are clearly laid out in a straightforward manner. But another, I suspect more important, part of it is that it's become a habit which is (pun very much intended) habitually applied even when scientific exactitude is not on the line. It keeps going even outside the specific domains wherein it is a virtue.
Thus, we often see young academics spring into action writing in public about past thinkers with alacrity. Five posts into the new blog, they are grappling with the significance of semicolons in Derrida or the travails of making sense of the few remaining pre-Socratic fragments in context. Which is all very interesting, to be sure, but it is also way too specific, too explicit - too much in the vein of giving an account of oneself and justifying one's claim to be an academic. As if failing to live up to the implicit academic ideal would disqualify one's efforts and forever cement a reputation of being a fraud, a know-nothing, an amateur. It's all or nothing, and "all" includes lengthy and explicit accounts of whatever one happens to be doing at the moment.
If this sounds very much like impostor syndrome, it's because that's what it is.
That academics in particular fall into this habit more often than others is no accident. It goes with the territory: introductory courses demand that you show you've read the literature, with intermediate courses demanding more of it, and theses being a culmination of expository discourse. At every step of the way, you have to prove yourself, justifying each and every sentence. Down to the semicolon.
It is a very difficult habit to shake indeed.
Thursday, December 20, 2018
Wednesday, November 28, 2018
Shaking the habit
As I am ever faster approaching the end of my time as a master student, I naturally become more reflective of times past. Not only because these times are soon to be irrevocably over for my part, but also because others are bound to begin the same journey soon enough. It's an integral aspect of institutionalized education - there have been others before you, and there will be others after you. The show doth go on.
Part of this process is seeing young ones (they do get younger every year) enter into university life and encountering everything for the first time. And second time. And a lot of times until they, probably, hopefully, possibly get it. Or get a degree, whichever comes first. Their path is as personal as it is predictable - first comes the obstacle of getting into reading, then the obstacle of getting into writing, and somewhere along the way it all translates into getting into thinking. One step at a time, until either understanding or degree undergoes the formality of actually occurring.
I don't say this to imply that bachelors leave with a partial understanding of whatever field they've studied. Everyone has a partial understanding of everything - the world is too big for it to be otherwise. Rather, what I mean to imply is that there are habits that one gets into that one does not necessarily get out of. The most prominent of these habits is to write in the way that is demanded by the university system - particularly, in a way that shows you have actually read the literature on the syllabus. It is an easy habit to get into, seeing as it keeps graders off your back. However, it takes work to get out of this habit once the need for it has passed.
Like, say, after graduation.
The reasons for teaching such a style of writing are many, but most of them relate to a vaguely protestant notion of having Performed the Work. Thus, it is routinely demanded that students include page numbers whenever they refer to an author, rather than just make the reference in general. The only way to actually have a page number on hand, it is assumed, is if you have the book right there in front of you, opened and read. This goes for every reference at all times, meaning that writing becomes a bibliographically exhausting (not to be confused with exhaustive) effort - every time an author is mentioned, there has to be a page number to go along with it. To show that you indeed read the book.
If you at this point are having flashbacks to your university days, I apologize.
Thing is. Unless you are doing heavy duty exegesis where every word has to be read and understood in context, there is no reason to write like this outside of the educational setting. It is sufficient to give an account of what the author wrote, get the year of publication right, and move along with whatever argument you are trying to make. Especially if you are trying to tie several authors' lines of thought together - the overall thrust of your discursive momentum is sufficient to give context to your writing, and the addition (or, as the case might be, subtraction) of page numbers will not substantially contribute anything.
The same tendency can be seen, albeit writ large, in comprehensive introductions which mention every author in the field before getting to the topic at hand. This is an excellent writing strategy if you need to convey that you have read about a number of theories and can place them in a proper context. It is a less than excellent strategy if you want to write a compelling introduction which gets right to the heart of things.
There are a number of these habits that get drilled into you during your university years. Most of them are there to make you easier to evaluate (and subsequently grade), others are accidental. Some are even useful. The key to moving forward is to take a look at oneself and assess which habits still serve a purpose, which have to be unlearned, and which have to be kept in a state of being just remembered enough so that you can give useful advice to new students upon encountering them. The goal, of course, being to nudge these young fellows toward the habit of thinking, rather than settling for a habit of performing.
The show doth go on.
Thursday, October 25, 2018
Getting ahead, one leg at a time
There are three different ways of going about riding a bike.
The most intuitive way is to give it your all and effort to maintain maximum speed, Tour de France style, where it is all muscle all the time. Full speed ahead, legs thumping and whooshing. Oontz oontz oontz oontz. Faster, harder, overtake that Scooter. This usually occurs when you are in a hurry to get somewhere, want the exercise, or simply have not thought too hard about how to go about biking.
Then there is the economical way, where you effort just enough to get the bike into sufficiently sustained momentum that you can move forward without additional input. Just keep on rolling, maybe lean forward a bit, until more power is required. Then repeat the process as many times as necessary, alternating between building momentum and effortlessly moving forward. Eventually, you'll get where you are going, minimum effort style.
And then there is the low-speed high effort method, where you effort just enough to get moving, but not enough to actually move at sustained speed, and thus have to continually apply leg power to move at a crawl. Friction and gravity keep on slowing down the bike to such an extent that every pedal push becomes akin to the initial oomph to transition from standstill to motion. Previous efforts do not accumulate or help you sustain momentum, and every iota of speed has to be reestablished anew every step along the way.
At this point, you might be asking yourself - is this some sort of metaphor for life in general, where the different modes of biking represent different approaches to everyday activities and how to approach them? Or, possibly also, different states of mind that a person might slip into as they go about doing the thing called being alive?
To which I say: yes. Yes it is.
Saturday, September 1, 2018
An analytical poke
Every now and again I come to think of the big disconnect between the act of performing rhetorical communication and rhetorical analysis. Rhetorical communication happens any time you strategically choose your words to get someone to do something (from passing the salt to approving a bank loan). Rhetorical analysis is the act of looking really closely at some sort of rhetorical communication and analyzing what's going on in it. The communication usually happens very fast, and the analysis very slowly. That's the disconnect.
The disconnect is, of course, inevitable. An analysis has to perform many tasks, and be explicit about most of them. It has to provide context, justify the significance of the communicative act under analysis, and describe it in sufficient detail to convey to readers what's going on. This takes quite a number of words, even if only performed with the minimum of surplus verbiage. Even after subsequent revisions with the explicit intent to reduce word count, there will by necessity be a substantial amount of words to it. It goes with the territory.
The communicative act, on the other hand, only has to do what it set out to do. Once done, it's over, and other things can commence. In trivial cases, it literally takes seconds - the salt is passed. In other cases, it can take a bit longer, but tends to be limited by the physical constraints of the human body. A speech can only be so long. All said and done, other things happen. Life goes on.
Thus, analyses tend to end up being much ado about seemingly nothing. At first glance, you might wonder how it is even possible to write thousands of words about something that takes seconds to perform. Then you dig into it and discover that there's a lot going on in that one moment, which indeed needed all those words to unpack. Worse, you begin to look at similar situations for similar implications - the analysis continues inside you. Further communicative acts require at least some thought before they become routine again.
The power of rhetorical analysis lies in this disconnect. A good analysis will disconnect you from a situation, and then force you to reconnect to it in a new way. You think you knew what's going on, but looking back on it you realize that, no, there's more to it. Your perspective has changed, and so you must pay attention to the differences made visible. You have permission to be perturbed.
In all this, life goes on. But you still have to reconnect.
Tuesday, August 7, 2018
Active listening by the numbers
I like hearing mathematicians talk about doing math things. Usually I do not follow along, but it's nice to listen to. The [name] algorithm passed through the [name] filter and then cleaned up through the [name] process. It all makes sense to someone, probably.
There are different ways to approach discourse you do not understand. One is to simply throw up your hands and declare you do not understand any of it. Sometimes, this is in fact the most useful approach - radical honesty and all that.
Another approach is to take what you do understand and try to parse it using available data. If we know that, in math, procedures are often named after those who formulate them, we can gather that each time a name is mentioned, some sort of procedure is brought into the context. The names themselves do not matter as much as the courses of action they connote; they are shorthand for what to do and how to go about it.
This does not make the specifics any clearer, to be sure. But when someone objects "but what about the [name] conjecture?", you are now clued in to the fact that there is something amiss with the proposed course of action, which needs to be addressed. The content of this objection is unknown, but the form of it is clear. Whatever comes next - be it an "oh, but the [name] postulate solves that" or a heartfelt "shit shit shit shit shit" - your act of active listening has provided you with some insight into what's going on.
The same goes for any context. There are always two conversations going on at any given time, where one might be more prominent than the other. There is the factual conversation where specifics are tossed around left and right, where knowing what's what helps tremendously. These facts are timeless, and can be grappled with later on, on their own terms. There is also a very time-sensitive conversation going on in the now, where everything is specific to the very moment it is happening. This is the realm of moods, postures, physical positioning within the room, hierarchy - everything that affects a situation without necessarily being explicitly mentioned by anyone involved.
Needless to say, the fact that it goes without saying does not mean it is unimportant.
There is an ideal out there that conversations ought to take place solely in terms of the first conversation. Putting ideas against each other and all that. It has the merit of being an ideal, but as an analytic approach to actual social situations, it leaves out too many relevant aspects to generate useful insight. The "shit shit shit" response above might be a response to the fact that the [name] conjecture makes the thing difficult to perform, but it might also be a response to the fact that the person in question was planning on going home early that day, and just had that very plan dashed to pieces right there and then. Merely thinking in terms of content leaves out the very real life implications of form.
Be sure to keep both ears open as you move through life. Arguments are very seldom about the things they are about, and there are cases where losing the argument in the first kind of conversation means winning in the second. You just have to know to listen for it.
Monday, July 23, 2018
Universal literacy and you
Every once in a while, I remind people that writing (and reading) is a technology. This might seem an obvious point, but it has a series of non-obvious implications. One of the most important implications is that no one is born literate, and everyone has to attain it somehow. Writing is not an intrinsic ability of human beings, but a technology that can be mastered through practice. There is no natural age at which literacy occurs; it's all culture.
If you consider this in the context of education, the implications become slightly more tangible. Especially with regards to standardized education, where everyone is supposed to achieve the same goals at the same time. On the one hand, there are organizational and administrative reasons for having a system like that; standardization brings interoperability and routine. On the other hand, it is easy to over time begin to view the goals as natural stages of development. By age x, the standardized child is supposed to know a, by age y b, and so on. Performance becomes both expected and measurable.
The thing about technologies is that they are not one size fits all. Like clothing (another technology), it fits differently on different bodies. Some can just put it on, no big deal, while others have to struggle to even get an elbow in. Everybody is different, and expecting everyone to conform to the same standards becomes something of a contradiction in terms. Or, to invoke Foucault, a power tool.
Literacy has the advantage of having a high adoption rate. A large proportion of everyone can attain some basic level of literacy with effort, enough to process the written word for functional purposes. A high adoption rate is still less than 100%, however, and there will inevitably be those who for various reasons are simply not cut out for it. Not because of personal defects or lack of effort, but because that's how statistics work. Even at an adoption rate of 99%, there will be a sizable number of non-adopters. By feat of statistics, the illiterate walk among us.
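The arithmetic behind that claim is worth making concrete. As a back-of-the-envelope sketch (the population figure below is illustrative, not a statistic from this post):

```python
# Even a very high adoption rate leaves a large absolute number
# of non-adopters. Assume a hypothetical population of ten million
# and a 99% literacy adoption rate.
population = 10_000_000
adoption_rate = 0.99

# The remaining 1% is small in relative terms, large in absolute terms.
non_adopters = round(population * (1 - adoption_rate))
print(non_adopters)  # 100000
```

One percent sounds negligible; a hundred thousand people does not. That gap between the relative and the absolute is the whole point.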
To be sure, this is not an either/or issue. There are a significant number of dyslexics in the world, who can do the reading but have to effort for it. This is a result of the same process; the technology simply does not sit right with how their bodies work.
By reminding people that writing is a technology, I perform the slightly violent act of recontextualizing illiterate persons from deficient to unfortunate. Being illiterate in a society which expects universal literacy is a massive disadvantage, no two ways about it. If writing is a technology with less than 100% adoption rate, however, those unfortunate souls who end up being born as non-compatible become an expected (indeed inevitable) side-effect of a policy choice, rather than malfunctioning individuals. It is not their fault that one of the major societal technological choices made happened to have the side-effect of excluding them.
Writing is a technology. It is an obvious point, with many non-obvious implications.
Monday, July 9, 2018
The funny side of systematic literature reviews
I find myself thinking about systematic literature reviews these days. It is an unexpected thing to be randomly thinking about, to be sure, so I guess that means I'm officially an academic now. My habitus is augmented.
The quickest way to introduce systematic literature reviews is through a detour to unsystematic literature reviews. The unsystematic approach is easy to grasp: you simply grab a hold of any books or articles that seem relevant and start reading. At the other end of the reading process, you know more than you did before. This is generally a good way to go about learning (especially if you have a nice local library to draw from), and should not be underestimated.
It is not, however, systematic.
The lack of systematicity is something of a problem, though. Not to the learning process, mind, but to the performative aspect of being an academic. It is not cool or hip to say that you've read a lot of books and keep tabs on new articles in your field, and thus know a thing or two. This is not the image of a structured, rigorous and disciplined scientific mind that academia wants to project (both to itself and to the public), so something has to be done. A system has to be created, to let everyone involved claim that they followed proper procedure and did not leave things to chance. Thus, systematic literature reviews.
Depending on where you are in the process, the systematic approach can take many guises. If you are just learning about science and scientific literature, having a system in place to guide you through the reading is immensely helpful. It gives permission to look at a search result of 2931 articles and cut it down to a more manageable number. If it is a robust system, it specifies that search engines giveth what you asketh, and that you probably should be more specific in your search. Moreover, knowing which questions to ask the articles beforehand gives a structure to the reading, and allows for paying closer attention to the important parts. And so on, through all the steps. Having a template to follow answers a lot of questions, even if you find yourself deviating from it.
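The filtering step described above can be sketched in miniature. This is a toy illustration of how predefined inclusion criteria cut a large search result down to a reading list; the article records and the criteria themselves are hypothetical, not drawn from any real review protocol:

```python
# A minimal sketch of the screening step in a systematic literature
# review: articles are kept or discarded according to criteria that
# were fixed before the reading began.
articles = [
    {"title": "Article A", "year": 2015, "peer_reviewed": True},
    {"title": "Article B", "year": 2003, "peer_reviewed": True},
    {"title": "Article C", "year": 2017, "peer_reviewed": False},
]

def included(article):
    """Apply the (hypothetical) inclusion criteria:
    published 2010 or later, and peer reviewed."""
    return article["year"] >= 2010 and article["peer_reviewed"]

reading_list = [a for a in articles if included(a)]
print([a["title"] for a in reading_list])  # ['Article A']
```

The point of the system is visible even at this scale: the decision of what to read is made once, up front, rather than anew for every article - which is exactly what makes it both defensible and, as noted below, steerable.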
When you've been at being an academic for a while, the presence of an adopted system can shield you from the burden of overreading. There are always more books and articles than can be readily read, and every text ever written can be criticized on the basis of not taking something into account. By using the system, the age-old question of "why did you choose to include these texts but not these other texts" can finally be put to rest. The systematic literature review lightens the load by defining exactly which texts are relevant and which are not. And thus, the rigorous and disciplined reading can commence, conscience clear.
Next up the abstraction ladder, we find another use of these systematic reviews. When research has to be summarized and administrated, it simply will not do to go with something as unscientific as a gut feeling. The scientists involved might know what's what, but this intricate insider knowledge is not easily translated into something outsiders can partake of. Outsiders, such as the non-scientist bureaucrats put in place to administrate the funding mechanisms that determine which research efforts are awarded grants and which are not. By strategically employing review systems that include desired results (and exclude undesired results), funding can be directed in certain directions under the guise of impartial systematicity. Administrators (or their superiors) can claim all the academic benefits of rigorously following the method laid out for all to see, while at the same time subtly steering research efforts without having to be explicit about it. It is systematic, disciplined and impartial, whilst also being ruthlessly political.
The key takeaway here is not that systematic literature reviews are bad (problematic, maybe, but not bad). Rather, it is a reminder that the presence of a system does not in itself guarantee a robust outcome. Like all methodologies, there are strengths and weaknesses to consider in each particular case, sometimes more obvious than not. When a systematic review finds that only articles published by (say) economists are relevant to a particular issue, despite decades of scholarly publishing on the subject in other disciplines, the issue is not a lack of systematicity, but too much of it. A flawless execution of review methodology does not preclude asking what is up with such unrepresentative results.
I find it amusing that strategic and rhetorical dimensions of academia are obscured by reference to systematicity and specialized vocabulary (the terminology surrounding systematic literature reviews is something to behold). Not least because academics are the very people best positioned to problematize the living bejeezus out of just these kinds of subtle processes.
It's funny that way.
Sunday, April 29, 2018
What Marx can tell you about creating youtube videos
The first association that leaps to mind when someone mentions youtube is probably not Marx. In fact, he is probably not among the top five or the top fifty. Which is understandable, given that Marx is something of an 1850s guy and youtube is not very 1850s at all. The line between these things is not altogether clear.
Unless, of course, you are a David Harvey fan, and have listened to his series of youtube lectures on the man.
Those who have dabbled in creating youtube videos know that it is difficult to predict just how many viewers a video will get. There tends to be an average number, with some variation up and down. Then, seemingly for no reason, there are videos that get far more viewers than the others. Seen in context, they are the same as the other videos, except that something funneled viewers into that one video in particular. With enough sifting through the stats, it is sometimes possible to figure out what's up; if you do, then that is useful information.
Anecdotal evidence has it that it is usually the videos that took the least effort to make that win this accidental lottery. Conversely, those videos which take hours upon hours to produce tend to remain at their usual levels of viewers - possibly slightly fewer, just out of some spiteful statistical quirk. This perceived inverse relationship between effort and outcome is probably just imaginary, but it's easy to feel that it would be better if viewers flocked to the effort-intense video rather than to the throwaway two minute thingamabob. If viewers are only gonna see the one video, then it might as well be one of the good ones.
As it happens, Marx has something to say on the matter of the relation between effort and results. Specifically, he talks about socially necessary labor time. It is a very technical concept, given that you have to understand what "socially", "necessary", "labor" and "time" are defined as in order to really get the full story. The short of it is thus: if it takes you five hours to make a pair of shoes, and your competitor can crank out thousands of those same shoes in the same span of time, the market value of the pair you made does not go up because it took you a lot of time and effort. Consequently, any attempt to sell them at a price that corresponds to your time and energy invested will fall flat, given that there are other shoes sold for cheaper.
This has consequences for your career as a shoemaker, as you might imagine.
It also has consequences for all shoemakers. The competitor who put you out of the shoe business has to face the same dynamic. He can crank out shoes by the thousands, but if someone comes along who can produce tens of thousands of shoes in the same span of time, this is going to be an issue. The same dynamic that made it unfeasible to make one pair of shoes at a time also makes it unfeasible to remain someone who merely produces thousands of shoes. The sheer amount of shoes will drive down prices until it becomes an economically sound idea to either upgrade the production line or move into another line of work.
If you are tempted to say that this is why capitalism is good, seeing the immense amount of shoes it produces, Marx would agree with you.
The main point of the concept of socially necessary labor time is to decouple personal effort from market outcomes. As you can see in the example of shoemaking, the price someone is willing to pay for a pair is not based on personal factors; there is an impersonal dynamic at work beyond any one person's capacity to control. Those who want to compete in the shoe market have to produce shoes in such a way that it makes sense in terms of price and production capacity. One pair every five hours simply will not cut it, even if you worked really hard at it.
The same goes for youtube videos, in two important ways. The first is most obvious, so let's get at it first: expending vast quantities of time and effort into producing a video does not guarantee that viewers will flock to it. There is always a risk that you are making the youtube equivalent of those five-hour shoes, and it does not reflect badly upon you if this turns out to be the case. It's in the nature of the game.
Less obvious, but equally important, is what this tells us about those who crank out videos at an alarming rate without investing too heavily in the research or production quality departments. It is easy to become resentful and mutter about the unfairness of it all, where hard work is left unrewarded in favor of these clowns. This is not a useful state of mind, however, nor is it a useful analysis. Instead, it makes more sense to see it as an instance of socially necessary labor time: apparently, this is how many videos you have to crank out in order to remain competitive, even if these videos end up containing easily preventable errors and mistakes.
If you are tempted to say that this is why capitalism is bad, seeing the immense amount of shoddy videos it produces, Marx would anachronistically agree with you.
To reiterate: the point here is to decouple effort and market outcomes. Working hard in the sweat of your brow is not a reward in itself, nor does it guarantee that viewers will show up. Finding ways to streamline your process and make time for other things (or to do things better) is not cheating, it's just efficient. Moreover, being hard at work does not mean viewers owe you anything; being resentful that they are not appreciative enough of your efforts will not help you going forward. Conversely, if it turns out viewers really like that throwaway two minute thingamabob, then that is useful information.
Needless to say, if this goes for shoemaking and making youtube videos, then the notion of socially necessary labor time probably goes for a lot of other things as well.
Marx is sneaky like that.
Tuesday, April 10, 2018
Musing on being less on Twitter
Over the recent months, I have found myself looking less and less at Twitter. This manifests itself in many forms, the most dramatic being that I nowadays only occasionally turn on my middle monitor, whose main use is to display a never ending live-updating stream of tweets flowing like a less stylized version of the Matrix. The monitor just stands there, a black mirror in portrait mode.
The strange thing is that Janetter - my ancient twitter client that new users can not run due to long-forgotten arbitrary API limits - still runs, in preparation for the ever rarer occasions when I turn the monitor on just to see the flow of tweets again. As if closing the program would be some kind of definitive gesture, irrevocable once performed.
Less strange is that I find my thinking has changed. This is to be expected - as Byung-Chul Han noted, it is difficult to focus during a noisy party. But it is also more subtle than simply having less input to process. I find that I direct myself towards different company. Even if I were to think about something that happened to be trending on Twitter right this very instant, it would be from a different starting point, with different aims.
"Company" is the key term here, I suspect. Booth uses it to muse on the fact that we spend time in someone's company when we read their words, and conversely become company as others read ours. The quality of our company, both reading and writing, in many ways shape who we are, and who we try to be. Good company inspires upwards, while bad company keeps you down.
In more Twitter-related terms, this manifests as an implicit demand to become company to those we follow and those who follow us. As we think through the issues introduced and reiterated by those in our timelines, we ever so gradually come to feel the pressure to add our own thoughts to the flow. After seeing fifteen tweets about something, it becomes almost a knee-jerk reaction to write a sixteenth. Even if we only just heard about something mere minutes ago, we feel compelled to have said something about it.
This dynamic creates a very specific and other-directed way of thinking. You build up a sensitivity to trends and keywords, and act on what you see. Others see this as well, and react to your reactions; the fact that you both see and react to the same things creates an immense sense of community; it is sometimes referred to as social media validation. It is company, good or bad.
This thinking is like riding a bike, though. True, once learned, you do not forget it. But if you've not ridden a bike in a while, there is a strong possibility that the muscles used to pedal things forward have become less muscular than you remember, and thus the going is slower than it used to be. You still know what to look for - the trends, the keywords, the subtweets - but it is an effort to care. An uphill effort, to combine metaphors.
Thus, on the ever rarer occasions when I power up my middle monitor, I see what is going on and how it unfolds. The impulse to contribute to the goings on and insert myself into the company, however, is not strong enough for me to do it as often and as energetically as I used to. I'm simply not in that frame of mind any more. My thoughts and words are directed elsewhere.
It is only prudent that I mention this somewhere. For future reference.
Thursday, March 22, 2018
That thesis I wrote about Patreon
I wrote a thesis about Patreon.
There are different ways of going about writing a thesis about Patreon. An intuitive approach would be an instrumental, goal-oriented investigation as to which strategies work and which do not. The findings of such an investigation could then be distilled into a simple list of do's and don'ts, which readers could implement in short order and (probably, maybe, hopefully) generate more revenue.
I did not write that kind of thesis. If you came here looking for simple, straightforward advice about how to run your Patreon page, then this wall of text is not for you. (Neither are the posts about my other theses, for that matter, despite them all relating to each other in interesting ways.)
What I did was seemingly simple. I asked a straightforward question, and saw where it took me. The question was thus: what is Patreon, and what does being on it do to you?
As with all straightforward questions, the answer turns out to be everything but clear cut and easy to summarize. In order to answer it, we have to answer a couple of sub-questions first, just to make sure everyone is on the same page.
Seeing as this was a thesis in Rhetoric (Americans call it Composition and/or Speech; the discipline has different names depending on where you happen to be geographically), the first of these sub-questions is what we mean by "rhetoric". To summarize hundreds of years of back and forths, there are two main answers to this question. The first is the (neo-)Aristotelian answer that it is the art of finding the best possible means of convincing someone in a particular situation. In this case, rhetoric would be a set of strategies for maximizing Patreon donations, with varying degrees of excellence in execution. The other answer looks at the situation as a whole and asks what it implies for those who participate in it, and if things could be done differently. In this case, rhetoric consists of analyzing what it means to have a Patreon page, which implicit assumptions inform interactions on this page, and how these assumptions might lead to outcomes that were neither expected nor beneficial for the participants.
As you might have gleaned from the gist of things, my thesis fell firmly into the latter category. Hence the lack of simple, straightforward advice in list form.
We need to keep the different kinds of rhetoric in mind, as the difference between them tells us something about what goes on with regards to Patreon. Specifically, we shall look at the concept of "ethos" and how it plays out differently in the two paradigms.
In the (neo-)Aristotelian framework, ethos is a means of persuasion. The word "ethos" connotes everything that is related to the person doing the talking, and how these aspects of self are being used to convince the audience to do something. In this case, the "something" is donating. There are many possible means, depending on who is doing the asking for donations. For instance, various ailments or difficulties can be leveraged to generate sympathy, which creates a willingness to donate. Similarly, skills can be leveraged to show how donations go towards new projects (e.g. donate so I can afford to make a new movie or whatever). Or a common goal can be invoked, along with a more or less defined correlation between donating and achieving this goal (e.g. most fundraisers and charity drives). And so on and so forth. In short, ethos is a means to an end, and it is used as such.
In the more modern framework, "ethos" is more akin to "ethics", in that it connotes a way of being in the world. It is not as directly interested in solving the problem at hand, as it is in understanding the communicative process in a wider context. For instance, it does not see communication in terms of problems to solve (in this case, how to get people to donate), but rather as a series of interactions which generate certain expectations on future interactions. It also emphasizes the role of choice on the part of the person doing the communication - they can choose to present themselves this way or that, and they do so on the basis of available knowledge and ethical propensities. A person does not present themselves in a certain way only in order to solve a problem, but also as a way of being in the world. A Patreon page is not just an invitation to donate - it is also a declaration: this is who I am and what I do.
This might seem like a subtle difference, and it is. Thus, an example is in order, to put the two perspectives in perspective.
There are different ways of going about writing a thesis about Patreon. An intuitive approach would be an instrumental, goal-oriented investigation as to which strategies work and which do not. The findings of such an investigation could then be distilled into a simple list of do's and don'ts, which readers could implement in short order and (probably, maybe, hopefully) generate more revenue.
I did not write that kind of thesis. If you came here looking for simple, straightforward advice about how to run your Patreon page, then this wall of text is not for you. (Neither are the posts about my other theses, for that matter, despite them all relating to each other in interesting ways.)
What I did was seemingly simple. I asked a straightforward question, and saw where it took me. The question was thus: what is Patreon, and what does being on it do to you?
As with all straightforward questions, the answer turns out to be everything but clear cut and easy to summarize. In order to answer it, we have to answer a couple of sub-questions first, just to make sure everyone is on the same page.
Seeing as this was a thesis in Rhetoric (Americans call it Composition and/or Speech; the discipline has different names depending on where you happen to be geographically), the first of these sub-questions is what we mean by "rhetoric". To summarize hundreds of years of back and forths, there are two main answers to this question. The first is the (neo-)Aristotelian answer that it is the art of finding the best possible means of convincing someone in a particular situation. In this case, rhetoric would be a set of strategies for maximizing Patreon donations, with varying degrees of excellence in execution. The other answer looks at the situation as a whole and asks what it implies for those who participate in it, and if things could be done differently. In this case, rhetoric consists of analyzing what it means to have a Patreon page, which implicit assumptions inform interactions on this page, and how these assumptions might lead to outcomes that were neither expected nor beneficial for the participants.
As you might have gleaned from the gist of things, my thesis fell firmly into the latter category. Hence the lack of simple, straightforward advice in list form.
We need to keep the different kinds of rhetoric in mind, as the difference between them tells us something about what goes on with regards to Patreon. Specifically, we shall look at the concept of "ethos" and how it plays out differently in the two paradigms.
In the (neo-)Aristotelian framework, ethos is a means of persuasion. The word "ethos" connotes everything that is related to the person doing the talking, and how these aspects of self are being used to convince the audience to do something. In this case, the "something" is donating. There are many possible means, depending on who is doing the asking for donations. For instance, various ailments or difficulties can be leveraged to generate sympathy, which creates a willingness to donate. Similarly, skills can be leveraged to show how donations go towards new projects (e.g. donate so I can afford to make a new movie or whatever). Or a common goal can be invoked, along with a more or less defined correlation between donating and achieving this goal (e.g. most fundraisers and charity drives). And so on and so forth. In short, ethos is a means to an end, and it is used as such.
In the more modern framework, "ethos" is more akin to "ethics", in that it connotes a way of being in the world. It is not as directly interested in solving the problem at hand, as it is in understanding the communicative process in a wider context. For instance, it does not see communication in terms of problems to solve (in this case, how to get people to donate), but rather as a series of interactions which generate certain expectations on future interactions. It also emphasizes the role of choice on the part of the person doing the communication - they can choose to present themselves this way or that, and they do so on the basis of available knowledge and ethical propensities. A person does not present themselves in a certain way only in order to solve a problem, but also as a way of being in the world. A Patreon page is not just an invitation to donate - it is also a declaration: this is who I am and what I do.
This might seem like a subtle difference, and it is. Thus, an example is in order, to put the two perspectives in perspective.
Let's say we have a rhetor without any particular political opinions one way or the other. One day, he (let's make it a he) stumbles upon an alt-right blog, and notices two things. First, that it gets a lot of donations. Second, that it is very formulaic and uncreative, and mostly posts the same things over and over and over again with minor variations. Based on these two observations, he decides to hack the process and start his own blog in a similar vein. Not because he agrees with the opinions expressed, but because it seems an easier way to get an income than doing more labor-intensive work. After a while, his low-effort blog gets noticed by the true believers, and the donation money starts to pour in. Seeing as it works, he puts a little more effort into it, and eventually finds himself being a part of this political ecology. Not because he believes in what he writes, but because the donation money keeps coming his way.
Seen through the (neo-)Aristotelian framework, he has solved the problem. By presenting himself as someone who holds these particular beliefs, he manages to persuade his audience to donate money. He has succeeded with what he set out to do, and his audience is happy to see him keep at it.
As you might imagine, the modern framework is less than sympathetic to this course of action. For one, he uses his powers of rhetoric to exploit those who are vulnerable to this kind of industrially produced propaganda, in a sense preying on the weak. For another, his participation in this political milieu reinforces its message and makes it a more prevalent presence in the online spaces he frequents; there is strength in numbers, and he now numbers among them. Moreover, this is not the best use of his rhetorical skills, and he could contribute better things to the world than a low-effort repetition of insincerely held opinions.
In the former case, our fictive rhetor makes good use of ethos, as he manages to present himself as a fellow extremist, thus getting his audience to donate. In the latter, he fails his ethical obligation to be a good person whose presence in the world makes a positive difference when all is said and done. He has not been good company.
If you have read this far, you might have thought that we have moved rather orthogonally with regards to what Patreon is and how being on it affects its users. But I reckon you also understand why simply asking what to do in order to make donations happen is insufficient in order to understand what is going on. It is more than merely a quest to maximize the monthly donations, and the analysis has to widen in order to take all the relevant aspects into account.
With this in mind, we can pose the question of what Patreon is. In the simplest terms possible, it is a web site that allows people to ask for money from other people. Patreon also provides an economic infrastructure for getting said donations from here to there. Anyone can create a Patreon page and ask for donations on it. Moreover, they can present themselves in whatever terms they like in order to make these donations happen. This is, in short, it.
(To be sure, there are certain limitations as to who is allowed on the site, mostly relating to contradictory US social values. In order to keep things brief, I'm going to gloss over this fact with the quickness.)
This presents us with an interesting rhetorical situation. On the one hand, Patreon users are free to define themselves however they like, applying every bit of autonomy and rhetorical prowess they can muster. On the other hand, the very act of being on Patreon is a message in and of itself. Patreon exists to facilitate donations, and anyone who has a page is asking for such donations - even if they do not write anything on their page at all. There is communication going on between the lines whether the user acknowledges it or not. At the end of the day, a Patreon page is a Patreon page.
During the course of my thesis writing, I identified three strategies (broadly defined) for writing a Patreon page. Here, I present them in descending order of popularity.
The most common strategy is to describe what happens when someone donates. This is heavily encouraged through the system of rewards and goals; if an individual donates x amount of dollars, they get a reward, and if the accumulated donations reach a certain level, some action which could not previously be performed will now be performed. In this way, the relationship between the parties involved is well defined: everyone knows what will happen, and donors can weigh their options before choosing a course of action.
Another common strategy is to not have rewards, but to frame donations as encouragements to keep whatever activity is at hand going. The donation becomes its own reward, as it were. There are still overall goals (e.g. at x amount of total donations there will be an upgrade of recording equipment) but individuals are not rewarded above and beyond knowing that the thing they enjoy can keep doing its thing.
A less common strategy is to flat out not reward donations at all, but accept them nevertheless. This might be done for tax reasons (some jurisdictions exempt gifts from taxation, and explicitly giving nothing in return qualifies the exchange as a gift rather than a business transaction). It might also be done to avoid getting into a situation where gratitude is required (those who choose to donate even though they know they will receive nothing in return know that this is not a purchase). Or it might simply be because the user can't be bothered to think of something to write. There are no goals, no rewards, but the option to donate is open nevertheless.
It would seem at first glance that this last strategy is counter to the whole concept of having a donation page. But - as we saw earlier - simply having a Patreon page is a message in and of itself, and sometimes this is enough to get the point across.
All of these strategies deal with the tension between freedom and autonomy. Freedom means doing what you want to do, while autonomy means defining your own laws (or, in this case, your own goals). The tension comes into being whenever you want to do something that requires more effort than simply doing it. For instance, reading a book requires that you keep reading until you've read all you decided to read. At any point you are free to stop reading, but if you want to finish the reading, you have to make the decision to limit your range of options until it is completed. If you set a goal for yourself, you also have to discipline yourself until the goal is achieved.
The tension here is that both freedom and autonomy are limitations of each other. The defining characteristic of autonomy is that you choose your own rules and goals. Once you set upon the path of realizing the chosen course of action, however, you must limit yourself to doing the things that lead to attaining the goal. Not because someone else tells you to, but because this is what you decided to do. Whether it happens to be reading a book, finishing an education, or performing some other feat, the dynamic remains the same: once your decision has been made, you have to stick to it. Even if you at times feel like doing something else.
An example of this (to stick with the literary theme) is writing a book. The only way to finish it is to sit down and write. It might be tempting to go outside to enjoy the nice weather, or binge watch all seasons of Buffy, or go hang out with friends. At all points in time, you are free to go do these things. But if you ever want to finish that book - the goal that you, by your own volition, set for yourself - you have to set these freedoms aside and focus upon the task of writing.
Looking back on the three strategies outlined above, we can see how the tension plays out in each of them. The third strategy - that of not rewarding patrons - maximizes the amount of freedom in the relationship between parties. No reward is given, no reward is expected, and donations keep happening in so far as the donors find it in their interest to continue. The creator, for their part, can choose whichever creative direction they desire, unburdened by expectations and obligations. What you see is 100% what you get, take it or leave it.
This can be contrasted with the first strategy, that of giving specific rewards to everyone who donates a particular amount of money. Here, autonomy is maximized, in so far as the creator can choose which rewards are awarded at which levels of donation. However, over time, this might lead to creators finding themselves spending more time than initially expected making sure that donors get their just rewards. Making a donation is, in a sense, to enter into a contract, and it is up to the creators to live up to their part of the bargain. The freedom of the present is bound by autonomy expressed in the past. (Whether this is a productive relationship between creators and donors, or an inescapable iron cage where next month's rent depends on cranking out yet another unit, is always a contextual question.)
The middle strategy is, of course, a combination of the two. A degree of freedom is maintained, but if donations reach a certain level, something will happen. This something, while it is not a reward or contract in the same way as we saw above, is still a promise, and as such brings with it the obligation to fulfill it. (If nothing else, it looks - and sounds - bad if the audio recording equipment has not been upgraded for months and months after reaching the goal.)
I should stress that there is nothing inherently wrong with aiming for either autonomy or freedom in these matters. The point of this wall of text is not to say that you should do either instead of the other. Rather, the point - the thesis, as it were - is that you ought to make an informed choice when you create a Patreon page, and write it in such a way that you can live with who you potentially become. Giving lots of rewards is labor-intensive, but it is also an efficient strategy to get those donations to happen. Conversely, you might find that your creative efforts are hampered by the amount of extra effort you have accidentally committed yourself to. It all depends on who you are and what you are about.
Seen in this light, we are rapidly approaching an answer to the question of what Patreon is and how it affects its users. Moreover, we are able to ask new and interesting questions with regards to the ethos/ethics of online donation services. Given that Patreon users are free to define themselves and what they do (and for how much money this will be done), the tension between freedom and autonomy becomes front and center. Having a Patreon page becomes not only a way of asking for money, it also becomes an act of self-definition: this is who I am and what I do.
So. Donate to my Patreon, maybe?
Wednesday, February 7, 2018
Avoiding the problem head on
Valentine's Day approaches, and with it, a new blog project. This one will be the tenth iteration, and thus I wanted to do something special. My usual approach is to figure out a concept and use it as a template for new ideas - Relationship Statues is an example of that. This means I do not have a great deal of verbiage at hand on day one, and ever so gradually figure out what kind of posts to write. It's discovery and exploration as much as anything. This time, however, it is different.
This time, the new blog has a definite beginning, middle and end. And - more importantly - it is frontloaded to a never before seen degree.
This has me worried.
The funny thing about being worried is that there are different kinds of worry. There is the worry that the world will end, an all-consuming paralyzing worry. Or the worry that some dangerous element in one's immediate presence will spring into action, like a tiger. Or the worry that some elaborately planned course of events will fail to occur, leaving you in an awkward position (or botching that job interview). Or the worry that some unforeseen aspect will reveal itself and cause all previous plans to become obsolete - the factory closes, the application is rejected, the price goes up instead of down. Tangible, concrete worries about very specific things.
And then, there is my worry. I worry about capitalization.
That's right. Should I spell certain words beginning with an uppercase or lowercase letter? This is something I worry about.
There are two ways to understand this worry. One is to take it at its word, and face it head on. In the grand scheme of things, there are arguments for either case, and the matter can be resolved by simply picking one course of action and sticking with it. It might raise an eyebrow, but since it is consistent, it won't be a big deal. The worry is about a concrete problem that can be solved.
The other way to understand this worry is as a symptom. What I'm really worried about is not actually the capitalization (although having it solved would be a minor step forward). It is, however, a convenient thing to be worried about - thinking about it means not having to confront other things that are also worrying, but more indirect and difficult to pin down. If it was not this, the generalized worry would find some other minute aspect to zoom in on and fuss about.
This goes not only for this project, but for everything. Sometimes, you are worried about some minor detail because you genuinely do not know if it's this way or that, and the problem will go away if a solution comes along. At other times, the problem is not the problem, and the solution is to zoom out and take stock of the larger picture.
The difficulty is telling which is which, and when.
Sunday, January 14, 2018
It's not the end of the world, but we can see it from here
The world did not end yesterday.
This statement has two qualities. One is that it is universally true for literally every situation you will find yourself in, a precondition for situations being that yesterday led to the present. The second, and more immediately pressing quality, is that it is related to a very specific event.
If you are reading this in the future, then here is the context: yesterday, an alarm went off in Hawaii, warning about an imminent ballistic strike. Which is a technical way of saying that the nukes are coming, and they are coming this way. As you might imagine, this caused quite a bit of emotional anxiety for everyone involved. The fact that the whole ordeal happened because someone pressed the wrong button (I have been given the impression that this is the literal truth, rather than mere evocative language) did not help.
In fact, very few things help when the world is about to end. That is kind of the point of the world ending.
This is a very immediate situation to find oneself in. Everything becomes irrelevant, and one singular question becomes the totality of all possible lines of thinking: what do you do? Nothing matters any more, and thus the only thing that matters is what you do. Other questions, such as "what would others think?" "would this look good on my CV?" "does this affect my credit rating?" "can I really afford it in the long run?" - are swept away, and you are left with the immediacy of choice: do or do not, do this or do that. The long term is gone; this is your moment to define yourself on your own terms. For the duration, you are the most important thing in your reality. You decide. What do you do?
This immediacy is both terrifying and, in a perverse way, liberating. The word that most perfectly summarizes the situation is the old version of "awesome": to be struck to one's very core with awe in the presence of some overwhelming factor which quite literally is larger than anything one has ever encountered before. It is the kind of experience that leaves you mouth agape and your mind repeating: everything I knew was wrong. Nothing makes sense any more, and because of that, the multitude of considerations that permeate everyday life melts away. Nothing makes sense, and the only thing that is of any importance whatsoever is:
What do you do?
Fortunately, the news about the world ending happened to be greatly exaggerated. We are still here to talk about it, and to try to get a grip on what this all means. Most, I suspect, will see it as just another news item among many, and not think too closely about it; there are still plenty of everyday chores to be done, and the non-ending of the world means they will not do themselves. Life goes on, with ruthless indifference, and this confronts us with a single, even more pressing question:
What do we do now?
This statement has two qualities. One is that it is universally true of literally every situation you will ever find yourself in, the one precondition for any situation being that yesterday led to the present. The second, and more immediately pressing, quality is that it relates to a very specific event.
If you are reading in the future, then here is the context: yesterday, an alarm went off in Hawaii, warning about an imminent ballistic strike. Which is a technical way of saying that the nukes are coming, and they are coming this way. As you might imagine, this caused quite a bit of anxiety for everyone involved. The fact that the whole ordeal happened because someone pressed the wrong button (I have been given the impression that this is the literal truth, rather than mere evocative language) did not help.
In fact, very few things help when the world is about to end. That is kind of the point of the world ending.
This is a very immediate situation to find oneself in. Everything becomes irrelevant, and one singular question becomes the totality of all possible lines of thinking: what do you do? Nothing matters any more, and thus the only thing that matters is what you do. Other questions, such as "what would others think?", "would this look good on my CV?", "does this affect my credit rating?", "can I really afford it in the long run?", are swept away, and you are left with the immediacy of choice: do or do not, do this or do that. The long term is gone; this is your moment to define yourself on your own terms. For the duration, you are the most important thing in your reality. You decide. What do you do?
This immediacy is both terrifying and, in a perverse way, liberating. The word that most perfectly summarizes the situation is the old sense of "awesome": to be struck to one's very core with awe in the presence of some overwhelming factor which is quite literally larger than anything one has ever encountered before. It is the kind of experience that leaves you with your mouth agape and your mind repeating: everything I knew was wrong. Nothing makes sense any more, and because of that, the multitude of considerations that permeate everyday life melts away. Nothing makes sense, and the only thing that is of any importance whatsoever is:
What do you do?
Fortunately, the news about the world ending happened to be greatly exaggerated. We are still here to talk about it, and to try to get a grip on what this all means. Most, I suspect, will see it as just another news item among many, and not think too closely about it; there are still plenty of everyday chores to be done, and the non-ending of the world means they will not do themselves. Life goes on, with ruthless indifference, and this confronts us with a single, even more pressing question:
What do we do now?
Saturday, January 6, 2018
Backstage sociology
This semester, we talked a lot about Goffman's concepts of frontstage and backstage. One of the things that struck me about this pair of concepts is that it is, like so many others, fractal. You can apply it at just about any level, and then move either upwards or downwards, finding roughly the same processes going on. The scales differ, but the process remains the same.
These words have the advantage of meaning what you think they mean. Backstage is the social space behind the stage, where the last-minute rehearsals, costume changes and informal banter take place, while frontstage is, well, on stage. The two spaces have different social dynamics, and things that are proper in one are improper in the other, and vice versa. While the play is on, only the actors who are supposed to be on stage are on stage, and they have very defined roles to play; the show must go on. Only when they have retreated backstage can the actors let their guard down, stop acting and - quite unceremoniously - collapse into the post-performance heaps they really are.
The audience members, too, have roles to fill whilst the show is on. The fact that these roles mostly consist of sitting and watching makes them comparatively easy to play; this does not, however, take away from the fact that things get very strange very fast if audience members suddenly decide to join in on the action. Everyone present has a role to fill, and most everyone present knows these roles implicitly.
A non-theatrical example is a restaurant. Out among the tables, things are quiet and posh, with hushed conversations taking place among the dining guests, a reprieve from the hustle and bustle of everyday life. In the kitchen, however, the hustle and bustle is in full swing, with yelling, fast-paced motions and a stress level that is through the roof. The difference between frontstage and backstage is not subtle.
The fact that these two states of things happen in close proximity to each other means that there have to be boundaries between them. Often enough, these boundaries are subtle until you try to cross them. A restaurant guest is usually not allowed into the kitchen, and quickly escorted out should they somehow stumble into it. Shoppers are allowed to browse the store area, but any attempt to enter the back rooms will be ever so efficiently discouraged. If you do not have a keycard, you are not allowed into the office building. At concerts, only those with backstage passes are allowed into these mystical spaces.
Most spaces can be analyzed using these concepts. They are very versatile in this regard.
They are also fractal. Individuals act differently when they are frontstage (often quite literally meaning that they are not alone) than when they are backstage, and the boundaries between these states allow very few persons access. A small group (beginning at two persons) can similarly act differently when in a frontstage setting than when alone, with similar boundaries to entry. A large group (a theatre production, for instance) can project a particular image frontstage, while having very different dynamics backstage. And so on, scaling up as much as need be. (I suspect the discovery of alien life will have interesting implications in this regard.)
The only thing needed to use these concepts is an impulse to apply them to concrete situations. Upon reading this, you now have this impulse.
Have fun.