I find myself thinking about systematic literature reviews these days. It is an unexpected thing to be randomly thinking about, to be sure, so I guess that means I'm officially an academic now. My habitus is augmented.
The quickest way to introduce systematic literature reviews is through a detour to unsystematic literature reviews. The unsystematic approach is easy to grasp: you simply grab hold of any books or articles that seem relevant and start reading. At the other end of the reading process, you know more than you did before. This is generally a good way to go about learning (especially if you have a nice local library to draw from), and should not be underestimated.
It is not, however, systematic.
The lack of systematicity is something of a problem, though. Not to the learning process, mind, but to the performative aspect of being an academic. It is not cool or hip to say that you've read a lot of books and keep tabs on new articles in your field, and thus know a thing or two. This is not the image of a structured, rigorous and disciplined scientific mind that academia wants to project (both to itself and to the public), so something has to be done. A system has to be created, to let everyone involved claim that they followed proper procedure and did not leave things to chance. Thus, systematic literature reviews.
Depending on where you are in the process, the systematic approach can take many guises. If you are just learning about science and scientific literature, having a system in place to guide you through the reading is immensely helpful. It gives permission to look at a search result of 2931 articles and cut it down to a more manageable number. If it is a robust system, it specifies that search engines giveth what you asketh, and that you probably should be more specific in your search. Moreover, knowing which questions to ask the articles beforehand gives a structure to the reading, and allows for paying closer attention to the important parts. And so on, through all the steps. Having a template to follow answers a lot of questions, even if you find yourself deviating from it.
When you've been an academic for a while, the presence of an adopted system can shield you from the burden of overreading. There are always more books and articles than can be readily read, and every text ever written can be criticized on the basis of not taking something into account. By using the system, the age-old question of "why did you choose to include these texts but not these other texts" can finally be put to rest. The systematic literature review lightens the load by defining exactly which texts are relevant and which are not. And thus, the rigorous and disciplined reading can commence, conscience clear.
Next up the abstraction ladder, we find another use of these systematic reviews. When research has to be summarized and administrated, it simply will not do to go with something as unscientific as a gut feeling. The scientists involved might know what's what, but this intricate insider knowledge is not easily translated into something outsiders can partake of. Outsiders, such as the non-scientist bureaucrats put in place to administrate the funding mechanisms that determine which research efforts are awarded grants and which are not. By strategically employing review systems that include desired results (and exclude undesired results), funding can be directed in certain directions under the guise of impartial systematicity. Administrators (or their superiors) can claim all the academic benefits of rigorously following the method laid out for all to see, while at the same time subtly steering research efforts without having to be explicit about it. It is systematic, disciplined and impartial, whilst also being ruthlessly political.
The key takeaway here is not that systematic literature reviews are bad (problematic, maybe, but not bad). Rather, it is a reminder that the presence of a system does not in itself guarantee a robust outcome. Like all methodologies, there are strengths and weaknesses to consider in each particular case, sometimes more obvious than others. When a systematic review finds that only articles published by (say) economists are relevant to a particular issue, despite decades of scholarly publishing on the subject in other disciplines, the issue is not a lack of systematicity, but too much of it. A flawless execution of review methodology does not preclude asking what is up with such unrepresentative results.
I find it amusing that the strategic and rhetorical dimensions of academia are obscured by reference to systematicity and specialized vocabulary (the terminology surrounding systematic literature reviews is something to behold). Not least because academics are the very people best positioned to problematize the living bejeezus out of just these kinds of subtle processes.
It's funny that way.