Systematic reviews and meta-analysis are essential tools for synthesizing the evidence needed to inform decision-making. Systematic reviews summarize the available literature using specific search parameters, followed by a critical assessment and logical synthesis of multiple primary studies.
Meta-analysis refers to the statistical analysis of data from independent primary studies focused on the same issue, whose objective is to generate a quantitative estimate of the phenomenon studied, for example, the effectiveness of the intervention. In clinical research, systematic reviews and meta-analyses are a fundamental part of evidence-based medicine. However, in basic science, attempts to evaluate previous literature so rigorously and quantitatively are rare, and narrative reviews prevail.
How to Develop Meta-Analyses
Meta-analysis can be a challenging undertaking, requiring tedious review and statistical understanding. Software packages that support meta-analysis include the Excel add-ins MetaXL and MIX 2.0, RevMan, Comprehensive Meta-Analysis software, JASP, and the metafor package for R. Although these packages can be adapted to basic science projects, difficulties may arise from the specific characteristics of basic science studies, such as large and complex data sets and the heterogeneity of experimental methodology.
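The core calculation these packages perform can be illustrated with a minimal sketch. Below is inverse-variance (fixed-effect) pooling in plain Python; the effect sizes and variances are hypothetical, and a real analysis would rely on a dedicated package such as those listed above.

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance (fixed-effect) pooling of study effect sizes."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    # 95% confidence interval under a normal approximation
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    return pooled, se, ci

# Three hypothetical studies: mean differences and their variances
pooled, se, ci = fixed_effect_meta([0.30, 0.45, 0.20], [0.04, 0.09, 0.02])
```

Random-effects models extend this scheme by adding a between-study variance component to each weight.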
Validity of tests in the basic sciences
To assess the translation potential of basic research, the validity of the evidence must first be assessed, usually by examining the approach taken to collecting and evaluating the data. Studies in basic sciences are broadly grouped as hypothesis generators and hypothesis-based. The former tend to be principle test studies with small samples and are usually exploratory and less valid than the latter.
It can even be argued that studies reporting novel results also belong to this group, as their results remain subject to external validation before being accepted by the scientific community at large. On the other hand, hypothesis-based studies are based on what is known or what previous work suggests. These studies can also validate previous experimental findings with incremental contributions. Although these studies are often overlooked and even discarded due to a lack of substantial novelty, their role in external validation of previous work is critical in establishing the translational potential of the findings.
Selection of the Experimental Model
Another dimension of the validity of tests in the basic sciences is the selection of the experimental model. The human condition is almost impossible to recapitulate in a laboratory setting, so experimental models (e.g., cell lines, primary cells, animal models) are used to mimic the phenomenon of interest, albeit imperfectly. For these reasons, the best quality evidence comes from evaluating the performance of several independent experimental models.
This is achieved through systematic approaches that consolidate the evidence from multiple studies, thus separating signal from noise and allowing models to be compared. While systematic reviews can be conducted to achieve a qualitative comparison, meta-analytical approaches employ statistical methods that allow hypotheses to be generated and tested.
When a meta-analysis in the basic sciences is based on a hypothesis, it can be used to assess the translational potential of a given outcome and provide recommendations for further clinical and translational studies. On the other hand, if the hypothesis tests of the meta-analysis are inconclusive, or if exploratory analyses are performed to examine the sources of inconsistency between studies, new hypotheses can be generated and subsequently tested experimentally.
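One common way to examine inconsistency between studies is with heterogeneity statistics. The sketch below computes Cochran's Q and the I² statistic from hypothetical effect sizes and variances; it is illustrative only, not a substitute for a full meta-analytic package.

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 statistic for between-study inconsistency."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Q: weighted squared deviations of each study from the pooled estimate
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: percentage of variability attributable to heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2
```

Large I² values (e.g., above 50%) suggest substantial heterogeneity and may motivate subgroup or sensitivity analyses to locate its source.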
Search and selection strategies
The first stage of any review involves the formulation of a primary objective in the form of a question or research hypothesis. Reviewers should explicitly define the objective of the review before starting the project, which serves to reduce the risk of data dredging, in which reviewers retroactively assign meaning to significant findings (Kwon et al., 2015).
Secondary objectives can also be defined; however, caution should be exercised, as the search strategies formulated for the primary objective may not fully cover the set of work required to address the secondary objective. Depending on the objective of a review, reviewers may choose to conduct a rapid or systematic review. Although the meta-analytical methodology is similar for systematic and rapid reviews, the scope of the evaluated literature tends to be significantly narrower for rapid reviews, allowing the project to move forward more quickly.
Systematic review and Meta-analysis
Systematic reviews involve comprehensive search strategies that allow reviewers to identify all relevant studies on a defined topic (DeLuca et al., 2008). Meta-analytical methods allow reviewers to quantitatively evaluate and synthesize study results to obtain information on statistical significance and relevance.
Systematic reviews of basic research data have the potential to produce information-rich databases that allow for extensive secondary analysis. To thoroughly examine the body of available information, the search criteria must be sensitive enough not to overlook relevant studies. Key terms and concepts, expressed as synonymous keywords and index terms such as Medical Subject Headings (MeSH), must be combined using the Boolean operators AND, OR, and NOT.
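As an illustration, a search string can be assembled by joining synonym groups with OR and then joining concepts with AND. The terms below are hypothetical placeholders, and the exact field tags and syntax differ between databases.

```python
def build_query(concept_groups, exclude=None):
    """Join synonyms with OR, concepts with AND, exclusions with NOT."""
    clauses = ["(" + " OR ".join(terms) + ")" for terms in concept_groups]
    query = " AND ".join(clauses)
    if exclude:
        query += " NOT (" + " OR ".join(exclude) + ")"
    return query

# Hypothetical concept groups for a basic-science search
query = build_query(
    [['"bone mineral density"', "BMD"], ["osteoblast*", "osteocyte*"]],
    exclude=["review[pt]"],
)
```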
Refining the Strategy
Truncation, wildcards, and proximity operators can also help refine a search strategy by capturing spelling variations and different phrasings of the same concept (Ecker and Skelly, 2010). Search strategies can be validated against a pre-selected set of relevant studies. If the search strategy fails to retrieve even one of these studies, it requires further optimization.
This process is repeated, updating the search strategy at each iteration, until it performs at a satisfactory level. An exhaustive search is expected to return a large number of studies, many of them irrelevant to the topic, commonly resulting in a specificity of < 10%. The initial screening of the library to select relevant studies is therefore time-consuming (it can take from 6 months to 2 years) and prone to human error.
At this stage, it is recommended to include at least two independent reviewers to minimize selection bias and related errors. Nevertheless, systematic reviews have the potential to provide the highest quality synthesis of quantitative evidence, directly informing basic, preclinical, and translational experimental and computational studies.
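The iterative validation step described above amounts to checking the search's recall against a benchmark set of known-relevant studies. A minimal sketch, assuming studies are identified by arbitrary IDs (the `pmid:` labels are hypothetical):

```python
def validate_search(retrieved_ids, benchmark_ids):
    """Check a search strategy against a pre-selected benchmark set.

    Returns the benchmark studies the search missed (an empty set means
    the strategy retrieves all known-relevant studies) and the recall.
    """
    missed = set(benchmark_ids) - set(retrieved_ids)
    recall = 1 - len(missed) / len(benchmark_ids)
    return missed, recall

missed, recall = validate_search(
    retrieved_ids={"pmid:111", "pmid:222", "pmid:333"},
    benchmark_ids={"pmid:222", "pmid:444"},
)
```

If `missed` is non-empty, the strategy is revised and the check repeated.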
Rapid Review and Meta-Analysis
The goal of a rapid review, as the name suggests, is to decrease the time needed to synthesize information. Rapid reviews are a suitable alternative to systematic approaches if reviewers prefer to get a general idea of the state of the field without a large investment of time. Search strategies are constructed by increasing the specificity of the search, thus reducing the number of irrelevant studies identified at the expense of the completeness of the search (Haby et al., 2016).
The strength of a rapid review lies in its flexibility to adapt to the reviewer's needs, which results in a lack of standardized methodology. Common shortcuts made in rapid reviews are:
(i) restricting the search criteria;
(ii) imposing date restrictions;
(iii) performing the review with a single reviewer;
(iv) omitting consultation with experts (e.g., a librarian for the development of the search strategy);
(v) restricting language criteria (e.g., English only);
(vi) forgoing the iterative process of searching and selecting search terms;
(vii) omitting quality-checklist criteria;
(viii) limiting the number of databases searched.
These shortcuts will limit the initial set of studies returned by the search, thus speeding up the selection process, but they can also lead to the exclusion of relevant studies and the introduction of selection bias. Since there is no consensus on whether rapid reviews sacrifice quality or synthesize unrepresentative results, it is recommended that critical results be subsequently verified by a systematic review.
However, rapid reviews are a viable alternative when parameter estimates are needed for computational modeling. Although systematic and rapid reviews are based on different strategies for selecting relevant studies, the statistical methods used to synthesize the data are identical.
Screening and selection
Once the bibliographic search is finished (the date on which the articles were retrieved from each database should be recorded), the articles are extracted and stored in a reference manager for screening. Before screening, inclusion and exclusion criteria should be defined to ensure consistency in the identification and retrieval of studies, especially when multiple reviewers are involved. The critical steps in screening and selection are:
(1) The elimination of duplicates.
(2) Screening of relevant studies by title and abstract.
(3) Inspection of full texts to ensure that they meet the eligibility criteria.
There are several reference managers available, such as Mendeley and Rayyan, the latter developed specifically to assist in screening for systematic reviews.
However, 98% of authors report using EndNote, Reference Manager, or RefWorks to prepare their reviews (Lorenzetti and Ghali, 2013). Reference managers typically have deduplication functions; however, these can be tedious and error-prone (Kwon et al., 2015).
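A simple deduplication pass can be sketched by keying records on a normalized title and year. This is illustrative only, with hypothetical record fields; real deduplication protocols also compare authors, DOIs, and pagination to avoid dropping distinct studies.

```python
import re

def dedupe(records):
    """Remove duplicate records keyed by normalized title + year."""
    seen, unique = set(), []
    for rec in records:
        # Strip punctuation, whitespace, and case before comparing titles
        key = (re.sub(r"[^a-z0-9]", "", rec["title"].lower()), rec["year"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

library = [
    {"title": "Bone density in mice.", "year": 2019},
    {"title": "Bone Density in Mice", "year": 2019},   # duplicate
    {"title": "An unrelated study", "year": 2020},
]
unique = dedupe(library)
```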
Protocol in EndNote
A protocol for faster and more reliable deduplication in EndNote has recently been proposed (Bramer et al., 2016). The selection of articles should be broad enough that it is not dominated by a single lab or author. In basic research articles, it is common to find datasets that are reused by the same group across multiple studies.
Therefore, additional precautions should be taken when deciding whether to include multiple studies published by the same group. At the end of the search, screening, and selection process, the reviewer obtains a complete list of eligible full-text manuscripts. The entire screening and selection process should be reported in a PRISMA flow diagram, which traces the flow of information throughout the review according to published guidelines.
Bramer, W.M., Giustini, D., de Jonge, G.B., Holland, L., and Bekhuis, T. (2016). De-duplication of database search results for systematic reviews in EndNote. J. Med. Libr. Assoc. 104, 240–243. doi: 10.3163/1536-5050.104.3.014
Lorenzetti, D. L., and Ghali, W. A. (2013). Reference management software for systematic reviews and meta-analyses: an exploration of usage and usability. BMC Med. Res. Methodol. 13, 141. doi: 10.1186/1471-2288-13-141
Kwon, Y., Lemieux, M., McTavish, J., and Wathen, N. (2015). Identifying and removing duplicate records from systematic review searches. J. Med. Libr. Assoc. 103, 184–188. doi: 10.3163/1536-5050.103.4.004