Thursday, May 2, 2019

What do sociologists know anyway

[In this master paper, I pull the rather unorthodox move of reading Foucault, Bourdieu and Habermas through the lens of the literary critic Wayne Booth. You can read it as a comparison of epistemologies, as an introduction to each author, or as a strange progression of paragraphs which keeps happening until it suddenly stops. Either way, our sociology department is rad and cool for letting me get away with this. You can read a truncated, slightly more to the point version of the Booth section here.]


Wayne Booth of Critical Understanding (1979) is all about reconciling different views with each other. Through wit and extended use of examples, he walks the reader through different ways of working out the implications of a multitude of perspectives and what it means to have a commitment to theoretical pluralism. As such, Critical Understanding transcends its humble pretense of merely being a book on literary criticism, and comfortably strides into the epistemic category of being a book on the methodology of science. In the following chapter, I shall endeavor to demonstrate how and why that is, and how it can be used to understand contradictory and complementary sociological theories without accidentally confounding or confusing them.

Booth identifies six archetypical attitudes toward the multitude of approaches to criticism. That is to say, six ideal types (to impose Weberian terminology) which can be used to describe what happens when the world is larger than human attempts to understand and capture it in words and theoretical models. Seeing as this is roughly analogous to what sociologists do in their craft, I reckon each of these types will generate nods of recognition along the way.

The first ideal type Booth identifies is an appreciation of the diversity of epistemic approaches, and an implicit (sometimes explicit) hope that frequent clashes between these points of view will garner insight into both the object of study and the models used to study it. Different paradigms (to impose Kuhn’s terminology) will strive to conduct the task of understanding the world their own way according to their own internal rules, and through a process of comparing and contrasting we as critics (and social scientists) will come to a better understanding of what is going on and how to move forward. The debate itself is sufficient to reconcile the plethora of perspectives.

Booth rebukes this ideal type by pointing out that in actual practice, critics rarely respond well to other critics making critical interventions and remarks on their work. More often than not, the charge of misunderstanding is leveled in the direction of those who voice dissenting opinions (be they ever so theoretically grounded), and the only result of the interchange is both sides doubling down on their own paradigms, rather than reaching some kind of elevated mutual understanding. Although Booth is keen to point out that the process of peer review is essential, on its own it is by no means sufficient to the task of reaching a state of paradigmatic reconciliation and cooperation. At best, it amounts to merely hoping for the best and soldiering on with whatever task one is preoccupied with.

The second ideal type is an attempt to reconcile different paradigms by resolving the semantic differences between them, and thus make them linguistically comparable and interoperable with each other. The goal of such an attempt is to show how each and every theory relates to the others (and the inherent implications of these relations), and to reduce the amount of conflict that arises from merely semantic differences in theories that roughly describe the same entities. By thus creating a shared linguistic platform from which to operate, critics (and social scientists) can proceed using either of the translated paradigms with confidence.

An inherent assumption of such an approach is that the main difference between paradigms is semantic, and that clearing up any confusion caused by differences in vocabulary will sort out any other important differences too along the way. Booth does not mention the differences between quantitative and qualitative methodologies, but it is easy to visualize that not even the most semantically clear communication between adherents of these standpoints will solve disagreements in and of itself. There is more at stake than merely not understanding one another clearly enough; some differences persist even once semantic misunderstandings are sorted out. Moreover, Booth concludes, any given project to unify the multitude of perspectives would itself constitute another perspective, with the implication that it too would have to be semantically reconciled down the road.[1]

A third ideal type is monism, the thought that there is a single correct point of view, and that we (or, as the case may be, I) have it. Booth differentiates between two kinds of monists. The first is a person who simply does not know enough to realize there are other points of view, and proceeds under the assumption that their own reference point is the correct one (by default). The second, and more interesting, kind consists of theorists who set out to create universal theories that can account for every aspect of every thing in every way. One theory, one truth.

As you might imagine, this approach does not solve the problem of reconciling different paradigms as much as it proposes to do away with it altogether. If others are wrong, then taking their viewpoints into account would be something of a misguided effort (e.g. quantitative and qualitative methodologies). As a critical endeavor, Booth maintains, this approach tends to settle for showing how others fail to approach the truth in a correct way, without engaging with the more complex aspects of these other theories in a constructive or useful manner. Monists are, in every sense of the word, uncritical.

A fourth ideal type is skepticism, a critical mindset dedicated to investigating the faults and merits of any paradigm it encounters. A skeptic differs from a monist in that the former does not necessarily require the overarching theory characterizing the latter; a skeptic might very well be skeptical in general, an equal-opportunity critic. In a sense, all social scientists are skeptics, in that statements must cohere with some minimum of plausibility before being entertained as potentially true. In short, taking things at face value is not the skeptic’s way, and a claim has to be substantiated with both evidence and logic to be believed.

Booth, writing as he did in 1979, did not have the advantage of being able to refer back to postmodernism as the process of skepticism run amok[2]. Nevertheless, he describes the drawbacks of the skeptical mindset in a way that is eerily reminiscent of the (sometimes unfairly ascribed) postmodernist tendency to reject everything out of hand. While there is method to the postmodernist madness, for our purposes it suffices to say that even skepticism is not an ideal platform upon which to build a mutual understanding between practitioners.

A fifth ideal type is eclecticism, the gathering of many different aspects from a variety of sources into a new whole. The overall aim of this approach is to take the ‘good parts’ from different theories and, through a process of integration, end up with a somewhat unified course of action, allowing the critic to proceed with a confidence informed by many different perspectives. Eclectics are open to new ideas and eager to put them to use; other points of view are not wrong, merely different.

Booth, as you might suspect, has reservations about this approach as well. For one thing, it might devolve into an uncritical acceptance of disparate and contradictory ideas, a hodgepodge of a little bit of this and a little bit of that. For another, it might merely be a monism in disguise, accepting some aspect of a theory while denouncing everything else as wrong. The initial premise of eclecticism – to be widely read and open to new impressions – is easy to get behind in theory, but in practice it tends to lead to scattered, undisciplined and (paradoxically enough) dogmatic thinking, albeit without the advantage of being aware of its organizing principle.

The sixth ideal type, finally, is what Booth calls methodological pluralism. This approach consists of taking in two or more paradigms on their own terms, and applying them to real-world phenomena according to their own internal rules. It does not try to resolve the contradictions between paradigms, but rather attempts to keep them both in mind at the same time. If one theory says A, and the other theory not-A, this does not constitute a logical impossibility that has to be resolved one way or another; rather, it is indicative that applying either method in situations pertaining to A will have different implications that should be taken into account when choosing the correct method. The plurality is not an attempt to create a new monism, but a pragmatic admission that the world is larger than our theories, and thus necessitates more than one[3].

At this point, a reader might interject that this is all very general, and that we are no closer towards having a positive framework regarding how to deal with the presence of multiple points of view. To be sure, being able to entertain more than one theoretical outlook as plausible and possible to apply makes sense, but surely we did not need this many words to arrive at this conclusion. How can we make things concrete? Booth, fortunately, proceeds by making things concrete, by using three contemporary critics as examples of three ways of applying methodological pluralism. In the interest of expediency, I will omit the particulars regarding the three critics (Ronald Crane, Kenneth Burke and M. H. Abrams) and focus on their methodological approaches.

Crane, according to Booth, subscribes to a methodological heuristic of first defining the problem and then applying the best possible method of solving the problem as defined. This heuristic has the initial advantage of being intuitive and straightforward – out of all the possible methodological tools available, the best one is chosen for the defined task. Define the problem, select the appropriate theoretical tools, then set out to solve it. Preferably, the theories chosen should exhibit coherence, correspondence and comprehensiveness with regard to the object of study (be it a poem or a social phenomenon). As simple and unproblematic as this approach might seem from the point of view of the person doing the defining, choosing and solving, it is not as straightforward when viewed from the outside. Booth focuses primarily on the fact that Crane explicitly defines his work in a particular way, that his critics lambast him for not doing the work of solving another problem defined another way, and on the confusion that results from this state of things. For our purposes, we can go further and point out that the same confusion reigns with regard to the choice of means as well as the execution of the chosen solution. Merely making what we perceive as the correct choice of theory and method is insufficient; what this approach offers in terms of clarity of action it lacks in terms of intersubjective accessibility.

Burke, again according to Booth, offers up another methodological wrinkle by pointing out that the theory chosen by necessity brings with it its own set of definitions and assumptions. Indeed, some problems might only be definable from within certain theories, meaning that defining a particular problem in a particular way overdetermines the choice of that theory. This is an inversion of Crane’s approach, in that Crane began by defining the problem and then proceeded by choosing the appropriate theoretical means to solve it. We thus end up in a chicken-and-egg situation, where the problem to be solved can only be defined by means of the very theory meant to solve it. Thus, Crane’s described process of defining a problem and then choosing the best possible means of solving it becomes a mischaracterization of what actually happens. Burke is not overly concerned with the epistemic wrinkles of this inversion, and settles for concluding that this is the way theories work; adhering to any given theory provides a “terministic screen” through which to view and define the world (and the problems to be solved in it). Thus, being in command of a multitude of theoretical perspectives allows for the formulation of a wider range of problems to be solved through the methods at hand.

Abrams, still according to Booth, performs yet another inversion of methodological propriety. In his explication of the poetry of Wordsworth, Abrams draws upon a multitude of historical and literary sources to show how one influence can be seen in one passage, another in another, and so on, in such an intensely close way that the whole becomes greater than the sum of its parts. In one sense, Abrams’ project is an exegesis of a particularly dense poem; in another sense, it is an exposé of the entire historical moment commonly referred to as Romanticism. By focusing so intently on the particular, something general is uncovered. But – and this is the methodological wrinkle – Abrams did not follow a prescribed method in his efforts, nor did he try to solve a defined problem. Rather, he was guided by a sense of where the historical linkages were to be found and a deep familiarity with the associations Romantic poets would have made at the time. Abrams’ work was not the result of methodologically driven processes, yet by virtue of the sheer performance on display throughout the work, it became a cornerstone upon which later methodologically driven critics and researchers have come to rely.

The task at hand for Booth is to develop a working model of methodological pluralism that takes each of these inversions into account without succumbing to monism, eclecticism or radical skepticism. In the simplest possible framing, Booth’s project is to give readers the tools they need to keep two or more frameworks in mind at the same time, without falling for the temptation to call either of them truer than the other. In a sense, it is a return to Aristotle’s maxim that what distinguishes an educated mind is the ability to entertain an idea without accepting it, with the added critical wrinkle that ideas are not just propositional statements, but whole worldviews whose implications have to be thought through.


With this, it is time to let go of Booth and turn to Foucault. Foucault (2003) famously wrote on the nature of knowledge and power, intertwining them in such a way as to say that one is readily translatable to the other. This has been the source of a great deal of confusion, as well as an equal measure of conceptual clarity. It is my endeavor to begin this section by discussing the clarity, and then turn to the confusion; readers might even now see where this is headed, but there is an order to these things.

In Discipline and Punish, Foucault outlines the mechanism whereby individuation is a paradoxical result of the standardization of expectations. An example he mobilizes is the disciplined soldier, whose ability to move in perfect formation and keep up an immaculate appearance is a measure of his capacity as an individual agent. When seen from afar, a regiment in formation seems to consist of a number of interchangeable parts, where the particular placement of an individual is incidental to the performance of the unit as a whole. When inspected up close, however, any deviations from the expected performance can be noted as marks of individuality. More often than not, they are marked as reasons for punishment – a ruffled uniform, unpolished boots, movements that are ever so slightly out of sync with the other troops’. Foucault zooms in on these small differences, and remarks that it is here individuality is constructed. Not in the sense that disobedience is the root cause of individuality, but rather that individuality is constituted by the subtle variations in the degree to which any given person conforms to the standardized expectations.

Another example of the same process is modern education. Here, too, standardized expectations are at play, and pupils are graded on how well they manage to live up to them. At the beginning of the educational process, the cohort is an undifferentiated mass of people whose precise characteristics have yet to be determined. At the end of that same process, a myriad of evaluations have been made and documented about these characteristics, and from these documents the pupils emerge as individuals. Some have shown proficiency in one area, and are graded thusly; others have shown affinity for other subjects, and are graded accordingly. Based on the picture painted through the documentation created through the educational process, the life trajectories of the individuated bodies can take different courses. Some might be presented in such a favorable light as to be railroaded towards a life of academia, while others find themselves struggling to convince anyone that they are legitimate participants in polite discourse. The process of subtle differentiations ends up resulting in differences that are anything but subtle.

It is in this way knowledge amounts to power. Not in the sense that individual pupils can leverage themselves into a more powerful life status by learning well in school (although this is a popular narrative), but in the sense that the fine-grained practices of evaluation and documentation present in the school system as a whole grants it the power to determine the life trajectories of a great number of those passing through it. The knowledge produced about individuals by documenting their every move becomes the very thing that defines them as individuals – the individual does not exist prior to having undergone the process of being evaluated with regards to the standardized expectations.

Foucault then shifts his analytical approach from the particulars of barracks and schools, and generalizes this principle of disciplined individuation to society as a whole. At any given place and moment in time, there are any number of standardized expectations at play through which individuals are being evaluated and judged. Some of them are written down and formalized (as in the many instances of bureaucratic documentation that permeate modern life), but a great many of them are left unsaid, implicit in the general interplay of individuals (such as social norms which dictate who is ‘hip’ and who is not). Rather than give an account of each and every particular instance of this process, Foucault generalizes it and describes the latent standardized expectations inherent in social interactions as “the discourse”. This shift, while understandable from the point of view of making a point and finally getting a book ready for publication, is also the source of a non-trivial amount of confusion. We shall return to this confusion after a brief discussion on Foucault’s shift from the specific to the general.

If we view Foucault’s move from the specific to the general through Booth’s typology of theoretical approaches, we might notice that it more closely resembles Abrams’ approach than anything else. Foucault did not set out to solve a defined problem using an explicitly described series of methodological steps. Rather, he gave thick descriptions of a series of [arguably early] modern practices in such a way that, upon having read them, readers cannot but nod to themselves in recognition. This would, in one sense, make it bad science, an under-documented investigation into the social processes of documentation and differentiation, where inferences are made without adequate material support for moving from the particular to the general. Yet, as with Abrams, the proof of the pudding is in the eating. At some point during the reading, the sheer amassed volume of details pointing in the same direction becomes overwhelming; the presentation reaches a critical mass where methodological objections are somewhat beside the point.

This does present something of a methodological problem for those who want to apply Foucault, however. They cannot readily proceed through mere mimesis, by replicating his feat in the same manner he performed it. Or, rather, they could, but the effort involved would be greater than an average academic is likely to have available in their everyday practices. A more manageable way to go about it would be to follow the Burkean way of applying Foucault as a terministic screen for the purposes of defining the problem to be investigated, and then proceed by following Crane’s example in choosing the best method for the problem as defined. The fact that Foucault pulled off being Foucault does not mean this possibility is democratically or evenly distributed, and it behooves us to temper our ambitions accordingly.

To return to the confusion mentioned above, we might gainfully apply Booth’s six ideal types of how to handle theories. What leaps to mind intuitively is that Foucault’s insistence on referring to “the discourse” can be – and has been – read as a commitment to radical epistemic warfare, the first of Booth’s types. “The discourse”, understood as merely words without referent in the material, is a free-floating social construct from which any and all social behavior is derived, meaning that anything is possible and everything is permitted. Unmoored from the material, the discourse offers a reading in which anything goes, as long as it can be expressed with sufficient self-confidence. In the postmodern condition, everyone gets a shot at pulling off a Foucault.

In hindsight, this has not panned out. There are any number of books with the word “postmodern” in their titles which can scarcely claim relevance today, despite taking the free-floating discursive premise and running with it. In part, this is due to the fact that they rely on a misreading of Foucault, and in part because of the untenability of their methodological premise; when everything is relative and all truth-claims equally valid, it makes no sense to read them rather than anything else. Donald Duck beats them by virtue of at least being amusing to read.

A more rigorous approach would assume the fourth type of attitude, that of radical skepticism. If the categories we use to differentiate people from one another are socially constructed, then it makes sense to critically examine and critique these categories and their social effects. Up to a point, this approach manages to accomplish what it sets out to do – it problematizes the systematic use of certain labels to disenfranchise certain groups from participating in public discourse (e.g. “hysterical” women), and increases awareness that words matter in a concrete, material way. The challenge is to not fall into a knee-jerk habit of dismissing every categorization out of hand, or to become so sensitized to every nuance of every word that practical communication grinds to a halt. While it is true that Foucault describes the present as the sediment of the past (this is the premise of the Archaeology of Knowledge), one has to be strategic when choosing when to engage in archaeological explication and when to simply say things with the plainness the situation requires.

Those who wish to engage Foucault in the mode of methodological pluralism have to confront the fact that it requires at least one hard swallow. By this, I mean that given that Foucault’s main mode of grounding his theory is demonstration, it follows that readers either have to accept or reject what they have been shown along the way, despite the lack of methodological progression from first principles to full theory. If this can be accepted, then the theory can be applied in a fruitful manner. If it cannot be accepted, then Foucault merely comes off as making one unsubstantiated claim after another, somehow managing to capture the attention of a great number of people. The methodological pluralist would have to take this into account when considering the applicability of Foucault’s theories, and – should it come to it – bite the bullet.


At first glance, Bourdieu (2012 [1990]) tackles what might seem to be the same questions as Foucault. Foucault tackled the question of power and how it operates; Bourdieu tackles the question of how it is that a great many individuals, all unique in their own ways, nevertheless happen to act in the same manner and seem to be formed from the same mold. The path each respective author has taken to these questions differs radically, however, and as we have seen, the way of arriving at a particular question is as important as the question itself. The main difference between Foucault and Bourdieu is that the latter explicitly defines himself as a sociologist, while the former does not.

Bourdieu, the sociologist, tackled the question of how structures and individuals can coexist without one overdetermining the other. Or, to phrase it another way, how to bridge the gap between methodological collectivism (Durkheim) and methodological individualism (Weber). It is true that humans in groups exhibit predictable tendencies that can be modelled without any particular knowledge of any of the individuals involved. It is also true that biographical facts sometimes trump these predictions and propel individuals into life trajectories that are not accounted for by collectivist models (the life trajectory of Bourdieu himself is a much-noted example of this). Having theories for both collective and individual processes provides a fuller understanding of what is going on, but it also leaves something of a gap between the one and the other. Bourdieu’s project was to bridge that gap, without reducing either category into the other.

The main concept Bourdieu uses to bridge the gap is that of habitus. A habitus is a set of generative dispositions, tendencies and propensities that frame the range of action and understanding in/of a particular individual. These are inculcated into the individual through a combination of socialization, ideology and lived experience, so as to form a totality of their being in the world. No two individuals are alike, and they might differ in a great many respects, whilst at the same time share an overall habitus that propels them into similar life trajectories. The concept encompasses both these similarities and these differences, without overdetermining either.

It is important to note that a habitus is not akin to an algorithm, which produces the same results every time when given identical input. The “generative” of “generative dispositions” connotes a probability of acting along certain lines, but leaves unspecified how these probabilities play out. This allows for individual agency, whilst at the same time accounting for why, given a sufficiently large sample size, certain trends and patterns will emerge along class lines. Individuals from the working classes will, overall, act differently when encountering a work of abstract art than individuals from the upper classes. This does not preclude the possibility of working class individuals responding with a deep appreciation of the artwork at hand, but it does maintain that, statistically speaking, this would be an outlier rather than the norm. This outlook allows sociologists to speak both of individual biographies and social structures at the same time, without engaging in complex translations between individualist and collectivist theories.
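For readers who think more readily in code than in concepts, the contrast between an algorithm and a generative disposition can be caricatured in a toy simulation. This is purely my own illustration, not part of Bourdieu’s apparatus, and the probabilities are invented for the sake of the example:

```python
import random

# Invented numbers, for illustration only: the probability that an
# individual responds appreciatively to a work of abstract art,
# conditioned on class habitus.
DISPOSITIONS = {"working": 0.15, "upper": 0.75}

def algorithmic_rule(social_class):
    """An algorithm: identical input always yields identical output."""
    return social_class == "upper"

def generative_disposition(social_class, rng):
    """A disposition: a probability of acting along certain lines,
    leaving the specific outcome open in any individual case."""
    return rng.random() < DISPOSITIONS[social_class]

rng = random.Random(42)  # fixed seed, so the sketch is reproducible
responses = [generative_disposition("working", rng) for _ in range(10_000)]
rate = sum(responses) / len(responses)

# Some working class individuals do appreciate the artwork (the outliers),
# yet the aggregate rate tracks the class-level disposition (~0.15).
```

The deterministic rule reproduces the stereotype – every member of a class acts alike – while the generative version permits individual deviation even as stable patterns emerge along class lines once the sample is large enough.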

The formation of habituses is as much a material process as anything else. The formation of a working class habitus is as much determined by the importance of work and the workplace in the lived experience, as it is by the conditions outside and surrounding the work itself. A vulgar example is the stereotype that workers are too tired to read Hegel when they come home from work, and thus engage in less intellectually stimulating activities, which means they never engage their capacity for philosophical thought, which would explain why the working classes are less philosophically inclined than those further up the class structure. This example ascribes capacities to individuals of a group, and thus does not describe a habitus; rather, it is a stereotype. A better, more nuanced explanation of the same phenomenon is that those in the working class are not encouraged to engage in the habits of casually reading anything at all, much less Hegel in particular, and that (writ large) this means members of the working classes are less likely to know someone who can get them into the habit. The impulse to do something has to come from somewhere, and if this impulse is not present, then something else will most likely be done. Those are the probabilities.

The education of upper class children, by contrast, more than likely includes trips to museums, the reading of important literary works and other excursions into high culture. Whether individuals retain any specific knowledge about particular works from these experiences is of lesser importance than the fact that they get accustomed to going to these places and engaging with these activities. What is ingrained is a set of habits, tendencies and memories associated with these activities, and thus also a higher likelihood of engaging with them at a future date. To be sure, there might be individuals who are utterly unmoved by any of the aspects of high culture, but as a class the propensity to revisit these sites of knowledge remains a permanent feature. An upper class habitus brings with it a familiarity with these things, if nothing else.

In a seminal work on education, Bourdieu and Passeron (1990) note that the educational system has a distinctly middle class character. The teachers, who are by definition educated and in possession of a non-trivial amount of cultural capital, more often than not belong to the middle class themselves, and are accustomed to socializing with others possessing a middle class habitus. The educational aims, too, tend to assume the possession of a middle class habitus, albeit implicitly and without framing it in terms of class. This results in a system that, from the word go, treats children belonging to the working and middle classes differently. The latter are already socialized in such a way as to know what is expected of them, while the former suddenly discover that things are not as they are at home, and that their success moving through the system is contingent on learning middle class mannerisms and habits. Failure to do so means academic death (which is to say, an exit from the system of education, either after completing the mandatory part, or earlier than that in extreme cases)[4]. Working class pupils will, to put it bluntly, have to learn twice as much as their middle class counterparts in the same amount of time. This is one of the ways in which class structures reproduce and perpetuate themselves.

If we turn back to Foucault, we see that there are similarities between their accounts of the process of education. We also see that they arrive at these seemingly similar conclusions through very different processes, and that it would be a mistake to assume that they can be easily translated back and forth between one another without losing important contextual and methodological assumptions along the way. Bourdieu’s concept of habitus, and the implicit assumptions with regard to structuring structures, is not at all the same as Foucault’s concept of the disciplining discourse, and the assumptions that go along with it. Class is the name of the game for Bourdieu; for Foucault, not so much. To invoke Booth’s second archetype, these are not merely semantic differences which can be sorted out through careful linguistic analysis and exegesis. These are different conceptual universes that have to be understood on their own terms, lest the whole become smaller than the sum of its parts.

From this, it is easy to understand why Booth included eclecticism (the fifth archetype) as a non-optimal mode of critical understanding. While the theorists overlap in their subject matter, it would be a mistake to take a little bit of one and a little bit of the other and uncritically combine them into a theoretical stew. For one thing, the different theories lend themselves to different methodological approaches (Bourdieu is very explicit about his methods, and in Distinction (1984) I would argue he is explicit to a fault), which means careful preparation is required to ensure that any inquiry informed by both theories actually investigates what it claims to investigate. It is very easy to slip up and begin empirically exploring Foucault’s theory of habitus. While this approach would indeed manage to be novel, it would not be informed by the same conceptual apparatus employed by other scientists in the field.


When it comes to methodological pluralism, Habermas (1984, 1987) is it. In Communicative Action, he summarizes in exhaustive detail[5] the theories of Durkheim, Weber, Parsons, Mead, Adorno, Lukacs and a number of other social theorists, in such a way that it is clear where each respective theory ends and Habermas’ continuation begins. To say that Habermas is firmly grounded in the theories of the sociological field (pun intended) would be to understate the case – I reckon many a sociologist read Habermas and only then firmly grasped what the aforementioned theorists were about. Clarity at length is not just a stylistic achievement, however – it is a cornerstone of his theory of communicative action.

Like Bourdieu, Habermas is primarily focused on bridging the gap between individual and structures. The former is expanded into the concept of lifeworlds, a concept imbued with the full philosophical force of phenomenology, from Schutz and Husserl onward. The latter is expanded into the concept of systems, derived both from the then emerging systems theory (including and beyond Parsons) and from Weber’s looming iron cage of rationality. Both of these concepts deserve elaboration in turn.

A lifeworld is the material and mental circumstance within which an instance of human subjectivity finds itself. This encompasses everything encountered by the subject in question, from the most private of experiences to the most public of actions. This is at once both very general and very specific. On the one hand, there are a great many experiences to be had between birth and death, and filing them all under one singular rubric is ever so slightly handwavy. On the other hand, Habermas needs to differentiate personal experience from the myriad of objective societal and technological processes that take place in the world, which affect said personal experience but do not stem from it. The wide/narrow definition of lifeworld performs this function and manages to preserve the human experience without reducing the scope of the theory to it; it is a move that encompasses processes on the micro level whilst also acknowledging trends on a macro level. It also avoids the reverse tendency: to fully and irrevocably incorporate lived experience as aspects of larger systems.

A system is, as might have been inferred from the above paragraph, the supra-individual processes that affect and shape social reality; among them, capitalism is the most general. Science, politics, art, technology – modernity consists of and is constructed by an innumerable number of systems working in concert and in parallel. What differentiates modernity from earlier historical periods is that these systems have grown more specialized, autonomous and powerful at a rapid pace (Bauman [1999], drawing on Hans Jonas [1994], would contend that they have outgrown humanity’s capacity to manage them). As with the concept of lifeworld, the wide/narrow definition serves an analytical function – it allows sociologists to talk about macro level tendencies without having to seek recourse in micro level counterparts.

This is not to say that lifeworlds and systems do not interact. On the contrary, a substantial portion of Communicative Action is preoccupied with spelling out in exhaustive detail just how such interactions take place on a theoretical level, and the implications of each such interaction under various circumstances. In the interest of keeping things short, I will now speedrun to the most famous of such interactions, which Habermas refers to as the tendency of systems to colonize lifeworlds.

The easiest way to characterize such a colonization is as an intrusion. In keeping with the discussions on the other authors, it would be prudent to use an educational example. A student is embedded in a lifeworld, with a family, friends, social relations and the whole phenomenological package. Then, she is given an assignment of unusually large proportions, and has to devote more time than usual to completing it. Thus, she has to take time from her other lifeworld activities (socializing, attending family gatherings, etc) in order to ensure that the assignment is completed within the allotted time frame. Somehow, she will have to navigate the demands from both spheres. More often than not, the logic of the (educational) system overrules the logic of the lifeworld. Not only in this one particular instance, but also in the way our imaginary student over time becomes more socialized into the specialized mindset that characterizes whatever field she studies. Upon completing her education, she will be a different person both compared to when she began, and compared to her peers; the logic of the system has colonized her lifeworld.

To be sure, this process is not an irresistible, one-sided endeavor. Rather, it is to be understood as an endless series of negotiations instantiated within the context of an individual and her lifeworld, where strategies to resist (or assist) the process can be leveraged with various degrees of success. The point, for Habermas, is to draw attention to the myriad of such situations that prevail in modern societies, and the tendency of specialized logics stemming from particular systems to permeate and determine the shapes of particular lifeworlds. The logic of completing an education is but one such systemic presence; the capitalist demand to get a job another, the medical imperatives to adopt certain habits yet another, and whatever ideological ideas have captured the political moment are yet another still.

The point of pointing out these tendencies is, as we alluded to in the beginning of this chapter, to support and enable clear communication. Being able to clearly identify the systemic demands put upon individuals allows them to clearly communicate about their circumstances, and possibly also to collectively formulate courses of action to deal with them. The most dramatic example of this would be, to invoke Marx, the working class transcending being a class-in-itself to become a class-for-itself, thus initiating the proletarian revolution (or, with a slightly higher degree of probability, the formation of unions and the initiation of collective bargaining). In less dramatic terms, it would help reduce interpersonal drama caused by systemic (dys)functions. Our imaginary student would be able to convey that university life demands a non-trivial amount of time, and her lifeworld peers would be able to make the counterpoint that there are other things in life than university assignments.

Here, we see a distinct similarity with Booth’s second archetype (that of semantic resolution). Ironically, this is also the most frequent critique leveled against Habermas in general and Communicative Action in particular. More often than is reasonable to count, it has been said that the ideal forms of communication formulated by Habermas are unrealistic, utopian and impossible to achieve. The irony is that Habermas would agree with these assertions, but insist that the attempt be made anyway. Even if perfectly undistorted communication is impossible, removing even one distortion would be an improvement. This ethos shines through not only in his argumentation, but also in his commitment to making sure each theoretical foundation is as transparent as possible. While Habermas does not manage to translate every social theory of individual and structural processes into one coherent, all-encompassing whole, his attempt has made it easier to navigate the theoretical landscape moving forward. Communicative Action is not a Rosetta Stone, but – and this is the critical question – why should it have to be?


Bauman, Z. (1999). Vi vantrivs I det postmoderna. Göteborg: Daidalos.
Booth, W. (1979). Critical understanding: the powers and limits of pluralism. Chicago: University of Chicago Press.
Bourdieu, P. & Passeron, J-C. (1990). Reproduction in education, society and culture. London: Sage.
Bourdieu, P. (1984). Distinction: a social critique of the judgment of taste. London: Routledge.
Bourdieu, P. (1990). The logic of practice. In Calhoun et al. (eds.) (2012): Contemporary sociological theory. Chichester: Wiley-Blackwell.
Foucault, M. (2003). Övervakning och straff. Lund: Arkiv.
Habermas, J. (1984). The theory of communicative action. Vol. 1, Reason and the rationalization of society. Boston: Beacon Press.
Habermas, J. (1987). The theory of communicative action. Vol. 2, Lifeworld and system: a critique of functionalist reason. Boston: Beacon Press.
Jonas, H. (1994). Ansvarets princip: utkast till en etik för den teknologiska civilisationen. Göteborg: Daidalos.
Munroe, R. (n.d.). Standards.

[1] There is an XKCD comic that illustrates this phenomenon rather poignantly.
[2] The whole book, written as it is in the mid-1970s, is a fascinating insight into a world prior to the advent of explicitly postmodernist theory. It makes frequent references to Barthes, Derrida and other names associated with the postmodern moment, but these references take on the nature of gesturing towards individual authors rather than a more overall movement; reading between the lines, it is possible to glimpse what is ahead, but it has not yet come to pass.
[3] Booth discusses at length whether this constitutes a monism in and of itself. Reading between the lines, and in the context of his overall project, it is difficult not to get the impression he does this more as an elaborate and humorous poke at overly pedantic philosophers, rather than as a serious objection. The rhetorical structure of first proposing five ideal types, only to reject them in favor of a sixth type, supports this reading.
[4] It is interesting to note that Bourdieu and Passeron (1990) do not view academic death as a failure state. Rather, it is a statistical index of when certain populations end their education. The working classes tend to opt out of further education after the mandatory portions are completed, while the middle classes continue into the levels of secondary education. At tertiary levels, even the middle classes begin to opt out, populating the universities with a distinctly upper class crowd. With the recent expansions of higher level education, the contemporary class ratios at any given level are bound to be different now, but a systematic investigation of who succumbs to academic death would most likely find that the same tendencies persist then as now. Plus ça change, only a select few survive long enough to become professors.
[5] It amuses me that Calhoun et al (2012) choose to focus on the equally exhaustive account Habermas gives of the communicative situation facing a workplace deciding to acquire a morning drink. Not only because it is a window into a different time with a different work culture, but also because it frames Habermas as a symbolic interactionist. This is an odd editorial choice to make, given his overall ambition to move beyond the merely interpersonal into a theory of struggle or dialectics between lifeworlds and systems.

Thursday, April 4, 2019

How to avoid graduating - a guide for PhD students


Not graduating is relatively easy in the early days of one's education. The student union provides a host of alternative activities which effectively crowd out all attempts at studies. At the PhD level, things become more difficult. The doctoral student will quickly discover that it is no longer socially acceptable to spend evenings at the union pub. He/she has to find other strategies for avoiding reaching the end point of student life, strategies which are both socially acceptable and compatible with his/her conscience. Fortunately, there are a number of such strategies which have been empirically proven to be very effective with regards to avoiding graduating and attaining the title of Dr. The purpose of this writ is to provide examples which can stimulate doctoral students' creativity with regards to self-directed activities in the fascinating field of graduation avoidance.

The safest strategy to avoid graduating is, of course, to ensure that the dissertation work never gets off the ground. Many doctoral students have adopted this strategy with great success. The effectiveness of this strategy depends primarily on how well you choose the alternative activity which will motivate not working on the dissertation. Since education at the PhD level contains a course section, an obvious course of action is to focus extensively on said courses, but given that this section can only be extended so far, it is imperative to not burn through it too quickly. Here, lessons learned from the early days of university life will come in handy.

In order to write a dissertation, a topic has to be chosen. This fact lies at the core of an excellent strategy for postponing graduation. When asked how the dissertation is coming along, the answer "I am currently in the process of choosing a topic" will provide extended cover from further uncomfortable lines of questioning. Much time can be devoted to interviewing different people in the selection process. Any and all suggestions should be carefully considered at great length, before finally (inevitably) being found lacking for this or that reason.

Another effective strategy for avoiding progress is the strategy of "but this is not a suitable thing to include in a dissertation". In short, this strategy consists of consistently refusing to accept that the thing keeping you busy at the moment is of sufficient interest or significance to warrant inclusion into the dissertation. This strategy is especially useful for doctoral students who have happened to be included in a research project. By letting the work pertaining to said project be entirely unrelated to the dissertation work, further progress can be postponed with impunity until the project has run its course.

The Penelope strategy

In the Odyssey, Odysseus' wife Penelope was besieged by a large number of suitors during his absence. She deftly avoided giving an answer one way or another by employing the following strategy. She promised to pick one of the suitors once she had completed the weave she was working on. Since every night she tore up the progress she had made on said weave during the day, she managed never to get closer to finishing it. The doctoral student seeking to avoid graduating has ample reason to see Penelope as a role model. Many of the strategies described below can be seen as variations upon Penelope's original strategy.

Of course, this strategy is difficult to apply literally in the context of a dissertation. To habitually burn the pages written during the day each evening would arouse suspicion. But remember:

No dissertation chapter is so good that it cannot withstand an extensive revision!

In other words, there is great potential for extending the dissertation writing process by constantly revising chapters. Additionally, this strategy can be varied: experiments can be redone (there will always be methodological flaws), and if the dissertation is based on gathered data there is always room for suddenly discovering that it has to be replaced with different data, and so on.

Another variant is the "just a little bit more" strategy. That is, to suddenly discover that the dissertation requires just a little more material, a few more experiments, an additional literature review, and so on. This strategy has the distinct disadvantage of becoming less convincing over time.

Live dangerously!

As previously stated, literally burning the pages written during the day is an unconvincing approach. But a doctoral student can, by applying carefully considered systematic carelessness, significantly increase the chances of unfortunate incidents substantially slowing down their dissertation progress. For instance:

A time-honored method (cf the Wonderful Adventures of Nils) is to place the dissertation manuscript near an open window, especially during windy days. With luck, the manuscript can be distributed over a great geographical area using this method.

Briefcases and other bags which include dissertation manuscripts should be brought along everywhere to increase the probability of their being lost or stolen.

A comprehensively implemented system of loose sheets significantly increases the chances of important chapters being lost, at least temporarily. Avoid putting labels on binders and floppy disks. This simple step can ensure important texts become inaccessible for years and years.

Another important rule, which applies to all above strategies, is to avoid keeping safety copies of the dissertation. This is especially effective when using a computer. A crashed, non-backed-up hard drive can delay graduating for several years. If diskettes are used, the older, soft kind is recommended, especially in combination with bad disk readers and copious consumption of coffee.

How to avoid working on your dissertation

One category of strategies has the common trait of avoiding graduating by simply avoiding working on the dissertation altogether. This category can be divided into two subcategories: manic and depressive strategies. Manic strategies consist of doing as much as possible which is completely unrelated to the dissertation. Depressive strategies consist of doing as little as possible overall. The two kinds of strategies suit different personalities to varying degrees, but there is nothing preventing you from mixing and matching. Correctly applied, they both amount to the same thing.

Manic strategies, or "Work promotes health and prosperity, and prevents many opportunities for research"

There are, in fact, many alternative activities a doctoral student can engage in to avoid working on the dissertation. These activities can be divided into academic and non-academic.

The academic activities are primarily all forms of institutional work. The major advantage of this kind of work is that engaging in it is highly socially accepted, and in many cases actually ends up being more appreciated than working on the dissertation. This includes teaching low-level courses and taking on various administrative tasks, which tend to be highly prioritized by the powers that be, and often have the additional quality of being in need of doing with brisk swiftness.

Activities relating to the student union and its various social functions (party committees and so forth) are other examples of excellent things to do to prevent dissertation progress. Helping your fellow doctoral students with their dissertations is an excellent activity with high graduation-postponing potential (for the helper, that is). (Conversely, one should of course avoid accepting too much help from other doctoral students, as this might inadvertently lead to making progress, or worse, graduating.)

In the non-academic world, we also find suitable activities: having a job (motivated by the student's economic situation), engagement in civil society, sports, evening courses, and so on. There is also the big Dissertation Delayer, particularly for women, known as the Family. This requires its own section, which is why we won't discuss it further at this point. Instead, we want to highlight romantic affairs as an activity with great potential to delay any and all dissertation-related progress.

Depressive strategies

Here, too, we can find a literary role model: the protagonist of the 19th century Russian author Goncharov's novel Oblomov. Oblomov spent most of his life in bed, meditating over all the nice things he would do once he managed to summon the energy to get up. With this role model in mind, you will surely find much inspiration in your efforts.

A depressive strategy worth its salt should not only prevent dissertation progress during the time it is deployed (if this is the correct term for doing nothing), but also contribute to the doctoral student's general discomfort and overall lack of capacity to perform. Physically moving as little as possible is an excellent principle with a high return on energy invested. (Note the manic corollary to this strategy: do all the sports! Everything that works, works!)

A drawback of going full Oblomov is that it is difficult to combine with having a clear conscience. Therefore, a modified depressive strategy is recommended. This consists of filling your time as inefficiently as possible. Here, there is no end to the possibilities:

Running errands at the bank, post office or other government institution is a perfectly socially acceptable activity, which can gobble up a lot of time and effort, and has the additional benefit of having to be performed during office hours - i.e. the time usually spent working on the dissertation. Good planning can increase efficiency significantly. For instance, avoid running more than one errand at a time.

Things in need of repair can fill a lot of time which otherwise would have gone to writing. Especially effective is to employ plumbers or construction workers who do not arrive at the appointed hour.

Appointments to doctors or dentists (not to mention therapists, or better yet psychoanalysts) are excellent opportunities for making zero progress. The ideal is of course to pick practices that lie quite a distance away, scheduling mid-day appointments, so as to maximize the working time spent moving to and fro.


As mentioned above, the Family is an especially important potential dissertation delayer, especially for women. Here are some handy tips for exploiting this opportunity to its fullest extent.

  • The kids should be well-planned and well spaced, such that there will always be two or three toddlers in the house during the critical dissertation years.
  • Daycare centers and other such rational options should be avoided. If that cannot be avoided, pick a daycare center committed to radical parental participation and community cooperation. If possible, pick two different daycare centers spaced far apart to increase time in transit. Also keep in mind that kids do not fare well spending more than six hours a day at the daycare! By carefully following this rule, it is possible to reduce efficient working hours per day to about five (or four, with sufficiently long transit times). (Alternatively, it is also possible to break this rule and instead spend the work hours nurturing feelings of guilt about this state of things, which is also an efficient way of reducing productivity.) The ideal strategy, though, is to employ the good old play schools, whose three hour schedule makes any rational activity on the part of the responsible adult impossible.
  • The non-dissertation writing parent should pick a job where being absent for even a single day is strictly impossible, combined with working hours which make dropping off and picking up of kids wholly the responsibility of the writing parent.
  • Plan your apartment such that secluded work spaces are avoided. The children should have access to as much of the apartment as possible. Placing the dissertation work space in the shared bedroom is an efficient way of preventing work during night time, which otherwise holds the inherent potential of boosting the making of progress.
  • Pick a partner with little or no understanding of research and the conditions under which it is conducted. A hostile attitude towards research has a very significant potential for dissertation delayage, especially if it is combined with general dudebro machismo. Naturally, the kids too can be taught to hamper progress at every turn.
  • The strategy of living dangerously (see above) can be effectively applied at home too. Small children are particularly effective at destroying manuscripts and diskettes, if given the opportunity. Pets are viable substitutes for children. A cat, for instance, has a high probability of acting out on a strategically placed manuscript pile.

How to best manage your adviser

The adviser is often an obstacle facing a doctoral student wanting to avoid graduating. A lot is won by choosing the "correct" adviser (although this is sometimes as difficult as choosing the correct parents). By "correct" we mean an adviser who either (i) leaves the student alone, or (ii) participates, but whose input is sufficiently destructive to not accidentally contribute too much to the process.

For the best chance of getting an adviser of type (i), the following traits should be sought out: (a) senile, (b) alcoholic, (c) ignorant of the dissertation topic (if applicable, see above), and (d) disinterested in general. Fortunately, many universities boast a hearty supply of such persons.

Choosing an adviser of type (ii) is risky, since their destructive capacity sometimes affects the student in unpredictable ways. Properly handled, however, a type (ii) adviser can be efficiently employed in the dissertation delaying efforts. Especially if he (it's usually a he) can be used to cultivate a low sense of self-esteem (more on this later).

In the unfortunate case of getting an ambitious adviser with a constructive attitude towards dissertation writing, all hope is not lost: there is a wide range of strategies to employ to get around this. We will detail them below.

Defensive strategies: how to avoid your adviser

In order to satisfy the demands of the social setting and conscience, a doctoral student should seek out their adviser at least once per semester. Generally, dates which are not immediately connected to the deadline for student grant applications should be chosen, to avoid giving off the wrong (correct) impression. However: seeking out your adviser is not the same as actually meeting them. A careful study of their habits makes it possible to strategically pick times to call or knock when they are not available. Upon subsequent questioning of why you haven't talked to them, you can with a clear conscience refer back to your frequent failed attempts at communication - "I've been trying to get a hold of you all week, but you're never here". Another strategy is to refer to how busy they are, and how you didn't want to be a bother or intrude. A slightly ruder variant is to claim that you've previously made a deal that they would initiate contact.

It is of course sometimes necessary to avoid the general campus area, if the risk of bumping into them is too large. Upon chance encounters, it is advisable to have some other reason to be there, which can be used to deflect the question of how the dissertation is going.

If you have made an appointment, it is usually a good tactic to be there at exactly the appointed hour. Should the adviser be ever so slightly late, you can with a clear conscience claim to have been there (preferably leaving right after having placed a passive-aggressive post-it note on their door).

Offensive strategies: a good offense is the best defense

Some of the strategies in the previous section contained aspects of being on the offense, but it is always possible to go all in. The core principle of an offensive strategy is to disarm your adviser by placing them in a morally disadvantageous position, normally by instilling within them feelings of guilt. Here are some handy phrases to use when your adviser expresses displeasure at your rate of progress with the dissertation:

  • But you never read what I write anyway.
  • You only had bad things to say about the last draft.
  • When was the last time you wrote an article?
  • You are only going to use my results for your own ends.
  • Why haven't I received my funding?
  • There is no point in graduating, there are no jobs to be had anyway.
A strategy that is hard to counter is the upbeat strategy. It consists of happily denying any and all problems. Here are some variations:
  •  Sure thing, you will have the draft by tomorrow!
  • Yeah, it's been slow going, but now I'm really getting into it!
  • I suppose I could send in the chapter now, but I have so many great ideas, so I have to write them out as well!
A simultaneously offensive and divertive strategy might be called a social strategy. By, for instance, asking your adviser out to dinner just as they are about to launch into a serious discussion about your progress, you can get them off balance to such an extent that things do not progress further than that. More advanced variants of this strategy are left to the reader's imagination.

"Not today, but soon...": some ways of justifying why you have not finished the promised dissertation chapter

My ink ribbon snapped
(slightly more modern variant: my printer toner expired)
My mother in law turned 70
My son had a math test
My cat had kittens
I have to get the car to the repair shop
The metro is on strike
I'm waiting for an article from overseas
I'm waiting for a printout from the computing center
I'm waiting for comments from [insert name here]
I found a math error, so now I have to redo everything
I haven't been inspired
I'm in love
I have a cold
Was I really supposed to hand it in today?
I forgot the manuscript at home
My husband promised to post the manuscript, haven't you received it?

How to handle your extended social situation

Your adviser is not the only obstacle you will face in your effort to prolong your studies. Any long-term dissertation delaying stratagem has to include ways of deflecting questions and attacks from your extended social situation. Relatives, friends, acquaintances, and (lest we forget) colleagues and other doctoral students often tend to show a non-zero amount of interest in how you're doing and when you plan on graduating. You can, of course, employ the same strategies as have been outlined above. You also have the option of blaming your setbacks ("setbacks") on the incompetence or malevolence of your adviser (see the section on "Strategic paranoia" below). When it comes to non-academic relatives, the opportunities to strategically bamboozle abound, since they often do not know the specifics of what writing a dissertation actually entails.

Possibly the most difficult proposition is to keep your fellow doctoral students out of the loop. They know the specifics of what writing a dissertation actually entails! But the experienced dissertation delayer knows no fear, and finds solutions to every situation. For instance, the Chutzpah-strategy can usually be gainfully employed, but requires having the personality to back it up. It simply consists of, at every possible opportune moment, declaring that the dissertation is almost 100% complete and that you're ready to defend it this very instant, were it possible. Which it isn't, because reasons, possibly adviser-related. A slightly milder variant is the general boasting strategy, where you namedrop the prestigious persons who have read your manuscript and glimpsed the bright future to come.

Oftentimes, even simpler strategies can be successfully employed. Younger doctoral students will often find themselves distracted should you ask them a sufficiently specific question about this or that author.

"Get married, get divorced, join a club or something"

The heading is a quote from an old Hasse&Tage skit which makes fun of the kinds of vapid relationship advice put on offer in tabloids. It just so happens that these very same vapid pieces of advice work marvelously as strategies for delaying your dissertation. The basic principle is that any and all life changes draw time and energy away from the dissertation, and thus fulfill the objective of delaying it. The general strategy can be formulated as "Change", where the thing to be changed can be chosen arbitrarily. For instance:

  • Change partner
  • Change place of residence
  • Change job
  • Change car
  • Change adviser
  • Change computer
  • Change word processor
That last change is particularly effective. It takes a lot of time to learn the new program, and even more time to convert all the old files to the new format. The biggest, most classic change you can pull is
  • Change dissertation topic
It is very possible to apply this strategy iteratively (which is to say, several times). Over time it tends to lose effectiveness and become a source of annoyance among your peers in general, and with your adviser in particular. Your chances of success increase if you can point to someone else already doing what you were doing, or better yet, to someone who has already done it.

How to cultivate low self-esteem

A genuinely abysmal self-esteem is an invaluable resource for a dissertation delayer. The challenge is to cultivate it in the desired direction, without accidentally allowing constructive input from your peers to hamper the process.

The core of low self-esteem is a hypothesis about reality, specifically that you as a person are insufficient. In this context, it can be formulated thusly:
  • I will never be able to complete this dissertation
The philosophers of science tell us that hypotheses can be "immunized" against falsification. This is an important strategic element for doctoral students. They have to learn to deal with any and all information contradicting the big hypothesis, and to find smaller, auxiliary hypotheses that explain it all away. For instance: if your adviser praises a dissertation chapter, you can make one of the following assumptions, each of which removes the validity of the positive information:
  • They're just saying that to make me not drop out
  • They haven't read it correctly
  • They don't understand any of this anyway
  • (if applicable) They probably just want to seduce me
Another strategy is of course to completely avoid situations wherein one might be exposed to positive information, for instance by never handing in your manuscript for evaluation. It is also important to avoid giving seminars and other presentations, especially at conferences, where you might (woe betide) become famous outside your own university.

Strategic paranoia

A paranoid outlook on life can also be an asset for an intrepid dissertation delayer. One advantage is that it removes the necessity of a negative self-image (which can be quite painful to carry around with you), by placing the blame for one's failures on one's (supposedly) hostile surroundings. The core principle of the paranoid explanatory model is that "it won't even matter if I try, since everyone is going to actively try to undermine my efforts, due to p", where p is a proposition about the world at large. We will now exemplify some possible values for p.

Discrimination
  • I am an immigrant
  • I am a woman (or, if applicable, a man)
  • I am working class (or, if applicable, upper class)

Sexual harassment
  • My adviser wants to get revenge on me because I rejected their advances

Jealousy
  • My adviser is jealous because I am smarter than them

Wrong paradigm
  • Everyone at this university follows the X school of thought, while I follow the Y school

Creative paralysis

A true dissertation delayer has to master the subtle art of placing themselves in a state of creative paralysis. This state can be achieved in a multitude of ways. A primary set of strategies consists of making further progress contingent on some factor or event outside your direct control. You might, for instance, ask the busiest, least cooperative technician at the shop to come fix your computer. Whilst awaiting said fix (whilst also not being too hasty in reminding said technician to come perform it), it simply is not possible to continue working. Similarly, it is fair game to await a colleague's response to the current draft, to await a statistician's double-checking of the data, and so on and so forth.

Another variant is to have some problem which one really ought to tackle, but which for one reason or another is too much effort to deal with right this instant. Bibliographical references are an excellent example. You find a reference in a bibliography to a book that is not in your local library, and then spend several depressive weeks gathering the energy to go to the one other nearby library where you know for sure the book awaits. (To be sure, there are interlibrary loans, but those take a long time. Also, where even are the forms to fill out for such loans anyway?) Even better, of course, is a reference to a work which 100% most certainly contains information absolutely essential to the dissertation, but which cannot actually be obtained. Such a state of things can delay progress for years on end.

Shorter bursts of creative paralysis can be of use, too. "There is too little time left to do anything meaningful anyway" is a particularly effective method in this context.

Let us conclude this summary of useful avoidance strategies by reminding you, dear reader, of a number of distractions which can be employed to interrupt or delay a working session:
  • Computer games
  • Beautiful weather outside
  • Major sporting events on TV
  • Phone calls
  • Cleaning your desk
  • Watering the flowers
  • Visiting the loo
  • Checking your mail


Appendix: schedule for two typical days of a doctoral student 

The following work schedules are representative cases based on empirical studies which will be presented in my upcoming dissertation (Ask, forthcoming). The most important finding of these investigations is that the effective working time per day for doctoral students of all personality types trends asymptotically towards a period of time which I refer to as Ask's constant, which equals exactly 29 minutes.

1. Typical workday for depressive doctoral student
10.00 Wakes up
10.00-10.30 Meditates over the perils and hardships of being a doctoral student
10.30-10.45 Gets dressed
10.45-11.30 Eats breakfast and reads the newspaper
11.30-12.00 Looks for a lost article
12.00 Leaves home
12.10 Misses the bus; the next bus arrives in half an hour
12.10-12.40 Awaits the next bus
12.40-13.00 In transit to university
13.00-13.15 Discusses the perils and hardships of being a doctoral student with a peer
13.15-14.10 Lunch
14.10 Finds all reading spots in the library already taken; goes to cafeteria
14.40 Returns to library; finds reading spot; claims it
14.45-14.55 Queues to retrieve a book
14.55-14.59 Sharpens pencil
14.59-15.04 Visits the loo
15.35-15.50 Smoke break
15.51 Returns to reading spot
15.52 Realizes that today is the last day to pay the rent, and that the bank closes at half past four
15.53 Leaves the reading spot
15.54-17.30 Miscellaneous errands
17.30 Returns home; tired to the bone
17.30-18.00 Reads the evening news
18.00-18.30 Prepares an evening meal
18.30-19.15 Eats the evening meal
19.15-19.30 Does the dishes
19.30-20.00 Watches the national news on television
20.00-22.00 Really ought to do more dissertation work, but gets stuck watching a movie on TV
22.00 Falls into bed, exhausted
Effective dissertation work time: 29 minutes. (=Ask's constant)

2. Typical workday for manic doctoral student
07.00 Wakes up, immediately arises and gets dressed
07.10-07.40 Morning gymnastics
07.40-07.50 Breakfast, reads the news
07.50-08.10 Bikes to campus
08.10-09.00 Prepares for teaching
09.00-10.30 Teaches
10.30-10.40 Drinks coffee
11.00-11.55 Meeting with student union
11.55-12.30 Lunch
12.30-14.00 Substitutes for sick guidance counsellor
14.00-14.45 Meeting with the faculty work group for increasing graduation rates among doctoral students
14.45-15.45 Plays badminton
15.45-15.50 Drinks coffee
16.00-17.00 Listens to guest lecture
17.00-18.00 Prepares post-lecture seminar (buys wine)
18.00-21.00 Participates in post-lecture seminar, engages in conversation with the guest speaker 
21.00-21.20 Bikes home
21.20-22.00 Grades essays
22.00 Falls into bed, exhausted
Effective dissertation work time: 29 minutes. (=Ask's constant)

Translator's note

This is a translation of Sam Ask's seminal work. The version upon which this translation is based can be found here. Some things have been changed to make sense to an international audience; others have been left intentionally inexplicable, as reminders of a time when things were different.