Brazilian Journal of Physical Therapy
Vol. 25. Issue 3. Pages 233-241 (01 May 2021)
Factors associated with the reporting quality of low back pain systematic review abstracts in physical therapy: a methodological study
Dafne Port Nascimento (a,*), Gabrielle Zoldan Gonzalez (a), Amanda Costa Araujo (a), Anne Moseley (b,c), Christopher Maher (b,c), Leonardo Oliveira Pena Costa (a)

a Masters and Doctoral Programs in Physical Therapy, Universidade Cidade de São Paulo, São Paulo, SP, Brazil
b The University of Sydney, Camperdown, Sydney, NSW, Australia
c Institute for Musculoskeletal Health, Sydney School of Public Health, Faculty of Medicine and Health, Sydney, NSW, Australia

* Corresponding author at: Masters and Doctoral Programs in Physical Therapy, Universidade Cidade de São Paulo, Rua Cesário Galeno 448, Tatuapé, São Paulo CEP 03071-000, Brazil. E-mail: dafnepn@yahoo.com.br; dafnepn@gmail.com
Abstract
Background

Abstracts of systematic reviews (SR) are frequently used to guide clinical decision-making. However, if the abstract is inadequately reported, key information may be missing and it may not accurately summarize the results of the review.

Objective

We aimed to investigate, for systematic reviews of physical therapy interventions for low back pain (LBP): 1) whether abstracts are fully reported; 2) whether abstract reporting quality is associated with review/journal characteristics; and 3) whether abstracts are consistent with the corresponding full texts.

Methods

We searched the Physiotherapy Evidence Database for SRs in physical therapy for LBP published between 2015 and 2017. Abstract reporting was assessed with the 12-item Preferred Reporting Items for Systematic Reviews and Meta-Analyses for abstracts (PRISMA-A) checklist. Associations between abstract reporting quality and review/journal characteristics were explored with linear regression. Consistency of reporting between abstracts and the full texts was evaluated by comparing responses to each item of the PRISMA-A using Kappa coefficients. Methodological quality of the reviews was assessed with A MeaSurement Tool to Assess systematic Reviews (AMSTAR-2).

Results

We included 66 SRs: 9 Cochrane and 57 non-Cochrane. Review methodological quality ranged from ‘high’ (8%) to ‘critically low’ (76%). The mean ± SD of the “total number of PRISMA-A fully reported items” (possible range 0–12) was 4.1 ± 1.9 points for non-Cochrane review abstracts and 9.9 ± 1.1 points for Cochrane abstracts. Factors associated with reporting quality of abstracts were: journal impact factor (ß 0.20; 95% CI: 0.06, 0.35), number of words in the abstract (ß 0.01; 95% CI: 0.00, 0.01), and review methodological quality (‘critically low’ with ß −3.06; 95% CI: −5.30, −0.82; ‘high’ as the reference category). Reporting was typically inconsistent between abstract and full text, with most Kappa values lower than 0.60.

Conclusions

The abstracts of SRs in physical therapy for LBP were poorly reported and inconsistent with the full text. The reporting quality of abstracts was higher in journals with a higher impact factor, in abstracts with a greater number of words, and when the review was of higher methodological quality.

Keywords:
Abstracts
Data accuracy
Low back pain
Methods
Systematic reviews
Introduction

Abstracts of systematic reviews (SRs) are frequently used to guide clinical decision-making.1,2 However, if the abstract is inadequately reported, key information may be missing and it may not accurately summarize the results of the review.3 The Preferred Reporting Items for Systematic Reviews and Meta-Analyses for abstracts checklist3 (PRISMA-A) was created to guide better reporting of SR abstracts. Despite the publication of PRISMA-A,3 the reporting quality of SR abstracts in psoriasis,4 neurosurgery,5 and general medicine6 remains inadequate. Readers cannot reliably assess study findings if the abstract is poorly reported or is inconsistent with the full text.

It is desirable that the abstract is consistent with the full text, i.e., that there is an acceptably high level of agreement between the reporting and interpretation of results in the abstract and in the corresponding full text.7,8 A survey of biomedical reviews indicated the need for further investigation and actions to decrease reporting inconsistencies between abstracts and full texts.8 Previous studies have analyzed the presence of overstated results (‘spin’)9,10 in SR abstracts and the methodological quality11 of SRs in the field of physical therapy for low back pain (LBP), but no analysis of abstract reporting quality has been reported so far for this field.

Our focus was on LBP as it is the most burdensome health condition globally in terms of disability.12–15 Additionally, we focused on physical therapy interventions because they are the most commonly used non-pharmacological and non-surgical treatment for LBP.14,16–19 Therefore, the objectives of this study were to investigate: 1) if abstracts are fully reported; 2) if abstract reporting is associated with review/journal characteristics; and 3) if these abstracts are consistent with the corresponding full texts.

Methods
Data sources and searches

This methodological study of SRs evaluating physical therapy interventions for treating LBP is a secondary analysis of previous research.9 The search was performed in the Physiotherapy Evidence Database (PEDro; www.pedro.org.au) on 10 January 2018. We chose PEDro because it is one of the most complete indexes in the field of physical therapy.20 Two independent authors performed the screening process. Our search strategy was: “systematic review” for method; “lumbar spine, sacroiliac joint or pelvis” for body part; “pain” for problem; and 2015–2017 for year of publication. This time period was selected because it begins two years after the publication of PRISMA-A, so authors had a greater opportunity to adhere to these recommendations than in older SRs. Eligibility criteria were: full journal publications (not abstract only); systematic reviews of clinical trials that evaluated the effectiveness of one or more physical therapy interventions for LBP; published between 2015 and 2017; written in English, Spanish, or Portuguese; and no restrictions on type of analysis (whether or not the review performed a meta-analysis).

Data extraction

Two independent authors evaluated the methodological quality of the included reviews, the reporting quality of their abstracts, and whether abstract reporting was consistent with the full-text reporting. They also extracted several review and journal characteristics from the reviews (between March and May 2018). The two authors discussed disagreements until consensus was reached; if consensus could not be achieved, a third author provided arbitration.

The following descriptive data were extracted: (1) language of publication (English/ Spanish/ Portuguese); (2) year of publication to calculate the age (in years) of each review by subtracting the year of publication from 2018 (descriptive); (3) review open access on PubMed Central (yes/no); (4) number of collaborating centers based on the number of departments mentioned in the author affiliations (descriptive); (5) which risk of bias tool was used (descriptive); (6) number of studies and types of study designs included in the review (randomized controlled trials/ other types of studies); and, (7) the number of times the review had been cited (descriptive), as downloaded from the Web of Knowledge (4th September 2018), and normalized by dividing by the age of the review. The journal features were: (1) mention of PRISMA-A in the “Instructions to authors” (yes/no); (2) if endorsement of PRISMA for full texts checklist21 was compulsory (yes/no); (3) the word limit for the abstract (descriptive), extracted from the “Instructions to authors”; and, (4) open access journal (yes/no), obtained from the Directory of Open Access Journals or the Journal’s website. The variables extracted as ‘descriptive’ were coded in order of appearance. We also coded the descriptive data that were ‘not reported’ or ‘not applicable’.

Abstract reporting

The 12-item PRISMA-A checklist was used to evaluate the reporting quality of the abstract of each included SR. Each item was classified as “fully reported” (contained all information specified in the item) or “not reported” (contained some of the information specified in the item, or contained none of the information specified in the item). A score was generated for each review (called the “total number of PRISMA-A fully reported items”) by counting the number of items that were “fully reported”. This score ranged from 0 (low reporting quality) to 12 (high reporting quality) and has been used in previous studies.4,22
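
The scoring rule above amounts to counting dichotomous item ratings. The sketch below is not the authors' code; the item labels and the example ratings are hypothetical, but it illustrates how the "total number of PRISMA-A fully reported items" could be computed for one review.

```python
# Minimal sketch: count the PRISMA-A items rated "fully reported" for one review.
PRISMA_A_ITEMS = [
    "title", "objectives", "eligibility_criteria", "information_sources",
    "risk_of_bias", "included_studies", "synthesis_of_results",
    "description_of_effect", "strengths_and_limitations", "interpretation",
    "funding", "registration",
]

def prisma_a_score(ratings: dict) -> int:
    """Return the 'total number of PRISMA-A fully reported items' (0-12)."""
    return sum(1 for item in PRISMA_A_ITEMS if ratings.get(item) == "fully reported")

# Example: an abstract with only two items fully reported
example = {item: "not reported" for item in PRISMA_A_ITEMS}
example.update({"title": "fully reported", "objectives": "fully reported"})
print(prisma_a_score(example))  # -> 2
```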

Consistency of reporting comparing abstract to the full text

Consistency of reporting was evaluated by comparing the reporting in the abstract to the reporting in the full text of the review. To achieve this, the full text was also evaluated with the PRISMA-A so that PRISMA-A results were obtained for the abstract and also for the full text for each systematic review. We then calculated agreement between abstract and full text for each of the 12 items from the PRISMA-A.

Methodological quality assessment

The 16-item A MeaSurement Tool to Assess systematic Reviews (AMSTAR-2)23 checklist was used to evaluate the methodological quality of the included reviews. Each item was classified as: “no”, “yes”, “partial yes”, or “not applicable”.

The classification of the methodological quality for each review was based on the AMSTAR-2 scores,23 as follows: “high” (no or one non-critical weakness), “moderate” (more than one non-critical weakness), “low” (one critical flaw with or without non-critical weakness), or “critically low” (more than one critical flaw with or without non-critical weakness). Non-critical weaknesses were related to items 1, 3, 5, 6, 8, 10, 12, 14 and 16; and critical flaws to items 2, 4, 7, 9, 11, 13 and 15.
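
As a minimal sketch of these classification rules (not an official AMSTAR-2 implementation), the function below assumes that only "no" responses count as weaknesses or flaws and that "partial yes" and "not applicable" are treated as acceptable; that simplification is an assumption, not a rule stated in the text.

```python
# Sketch of the AMSTAR-2 overall rating rules described above.
CRITICAL_ITEMS = {2, 4, 7, 9, 11, 13, 15}              # critical domains
NON_CRITICAL_ITEMS = {1, 3, 5, 6, 8, 10, 12, 14, 16}   # non-critical domains

def amstar2_rating(responses: dict) -> str:
    """responses maps item number (1-16) to 'yes', 'partial yes', 'no', or 'not applicable'."""
    flaws = {item for item, answer in responses.items() if answer == "no"}
    critical_flaws = flaws & CRITICAL_ITEMS
    non_critical_weaknesses = flaws & NON_CRITICAL_ITEMS
    if len(critical_flaws) > 1:
        return "critically low"
    if len(critical_flaws) == 1:
        return "low"
    if len(non_critical_weaknesses) > 1:
        return "moderate"
    return "high"

# Example: one critical flaw (item 7) and one non-critical weakness (item 3) -> "low"
responses = {i: "yes" for i in range(1, 17)}
responses[7] = "no"
responses[3] = "no"
print(amstar2_rating(responses))  # -> low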

Associations between abstract reporting and review and journal characteristics

To investigate if review and journal characteristics were associated with the quality of abstract reporting, we defined one dependent variable and six independent variables that had been previously identified in the literature.24–26 The dependent variable was the “total number of PRISMA-A fully reported items” (range 0–12 points; described above, under ‘Abstract reporting’). Three independent variables were categorical: AMSTAR-2 methodological quality score (as dummy variables for ‘high’, ‘moderate’, ‘low’, or ‘critically low’; described above, under ‘Methodological quality assessment’); abstract free of spin (dichotomized as “yes” or “no”, based on the 7-item spin checklist27 described below); and endorsement of the PRISMA for full texts checklist21 by the publishing journal, extracted from the “Instructions to authors” for each journal or from the PRISMA website28 (“yes” or “no”). Three independent variables were continuous: number of words in each review abstract (counted using the Microsoft Word “Word Count” function); 2017 journal Impact Factor (JIF), downloaded from the InCites Journal Citation Reports website (if a journal was not listed on this website, it was coded as not having a JIF, i.e., ‘missing value’); and number of citations, downloaded from the Web of Science (Clarivate Analytics) and normalized by the number of years since publication.

The literature has defined spin as “a misrepresentation of study results, regardless of motive (intentionally or unintentionally) that overemphasizes the beneficial effects of the intervention and overstates safety compared with that shown by the results” [p. 2].29 We used a 7-item checklist27 to detect the presence of spin in an abstract by comparing the interpretation reported in the abstract conclusion and title with the abstract results: 1) recommendations not supported by the findings; 2) title claims a beneficial effect not supported by the findings; 3) selective reporting of outcomes or analyses; 4) conclusion claims safety based on non-statistically significant results; 5) conclusion claims beneficial effects despite high risk of bias; 6) selective reporting of harm outcomes; and 7) the conclusion extrapolates the review’s findings to a different intervention. Each item was classified as “yes” (i.e., spin was present), “no” (i.e., spin was not present), or “not reported” (i.e., the item evaluated was not reported, or could have been omitted, intentionally or not). We classified an abstract as free of spin if none of the seven items was present.
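
A minimal sketch of this classification rule (not the authors' code; the shortened item labels are paraphrases of the checklist above): an abstract is classified as free of spin only if none of the seven items is rated "yes".

```python
# Sketch: classify an abstract as free of spin if no checklist item is rated "yes".
SPIN_ITEMS = [
    "recommendations_not_supported",              # item 1
    "title_claims_unsupported_benefit",           # item 2
    "selective_reporting_of_outcomes",            # item 3
    "safety_claim_from_nonsignificant_results",   # item 4
    "benefit_claim_despite_high_risk_of_bias",    # item 5
    "selective_reporting_of_harms",               # item 6
    "extrapolation_to_other_intervention",        # item 7
]

def abstract_free_of_spin(ratings: dict) -> bool:
    """ratings maps each spin item to 'yes', 'no', or 'not reported'."""
    return all(ratings.get(item) != "yes" for item in SPIN_ITEMS)

# Example: one item rated "yes" -> not free of spin
ratings = dict.fromkeys(SPIN_ITEMS, "no")
ratings["selective_reporting_of_harms"] = "yes"
print(abstract_free_of_spin(ratings))  # -> False
```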

Data analysis

Data were reported for all reviews and stratified into Cochrane and non-Cochrane reviews. We separated the data for Cochrane and non-Cochrane reviews because previous research indicated that Cochrane reviews of physical therapy interventions have better reporting and methodological quality than non-Cochrane reviews.30 The methodological quality of reviews was reported as frequencies of “high”, “moderate”, “low”, and “critically low” scores (AMSTAR-2). We used IBM SPSS software Version 20.0 (Boston, MA, USA) for all analyses.

Abstract reporting

We calculated the percentage of review abstracts achieving each item of the PRISMA-A, as well as mean and standard deviation (SD) of the “total number of PRISMA-A fully reported items”.

Consistency of reporting comparing abstract to the full text

Consistency of reporting was evaluated with Kappa coefficients31 for items 2–12 of the PRISMA-A checklist (item 1 was omitted as it relates to the title). Kappa values were interpreted using the following criteria: <0 “less than chance”; 0.01–0.20 “slight”; 0.21–0.40 “fair”; 0.41–0.60 “moderate”; 0.61–0.80 “substantial”; and 0.81–0.99 “almost perfect” agreement.31 An item was considered consistent if the Kappa value was 0.61 or higher (i.e., “substantial” to “almost perfect” agreement).
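
As an illustration (not the authors' SPSS analysis), the sketch below computes Cohen's kappa for one PRISMA-A item from hypothetical abstract and full-text ratings (1 = fully reported, 0 = not reported) and applies the interpretation criteria above.

```python
# Sketch: Cohen's kappa for one PRISMA-A item, abstract vs. full-text ratings.
from sklearn.metrics import cohen_kappa_score

def interpret_kappa(kappa: float) -> str:
    """Map a kappa value to the agreement categories used in this study."""
    if kappa < 0:
        return "less than chance"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

abstract_ratings  = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical item ratings in abstracts
full_text_ratings = [1, 1, 1, 0, 0, 0, 1, 1]   # hypothetical ratings in full texts

kappa = cohen_kappa_score(abstract_ratings, full_text_ratings)
print(round(kappa, 2), interpret_kappa(kappa))
```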

Associations between quality of abstract reporting and review and journal characteristics

We performed multivariate linear regression analyses to evaluate the association between reporting quality of the abstract and the six independent variables. We built a multivariate regression model and included independent variables achieving P < 0.05. The adjusted explained variance (adjusted R²), beta coefficients (ß), and their 95% confidence intervals (CIs) were calculated for each variable. Scatter plots of the “total number of PRISMA-A fully reported items” (dependent variable) against each independent variable were created to assess whether the relationships were linear and the model was appropriate.32 Independent variables with bivariate correlation coefficients of r ≥ 0.7 (indicating multicollinearity) were not included in the linear regression model.33
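
The sketch below (not the authors' SPSS analysis; the file and column names are assumed for illustration) mirrors this strategy: screen pairwise correlations for r ≥ 0.7, dummy-code the AMSTAR-2 rating with 'high' as the reference category, and fit an ordinary least squares model of the PRISMA-A score on the review and journal characteristics.

```python
# Sketch of the regression strategy described above (assumed data layout:
# one row per review, binary variables coded 0/1, 'amstar2_quality' holding
# 'high', 'moderate', 'low', or 'critically low').
import pandas as pd
import statsmodels.api as sm

reviews = pd.read_csv("reviews.csv")  # hypothetical data file

continuous = ["jif_2017", "abstract_words", "citations_per_year"]
binary = ["prisma_endorsed", "free_of_spin"]

# Multicollinearity screen: inspect pairwise correlations and drop one
# variable of any pair with |r| >= 0.7 before fitting.
print(reviews[continuous + binary].corr().abs())

# Dummy-code AMSTAR-2 quality, keeping 'high' as the reference category.
dummies = pd.get_dummies(reviews["amstar2_quality"], prefix="quality", dtype=float)
dummies = dummies.drop(columns=["quality_high"])

X = sm.add_constant(pd.concat([reviews[continuous + binary], dummies], axis=1))
model = sm.OLS(reviews["prisma_a_score"], X, missing="drop").fit()
print(model.summary())  # coefficients, 95% CIs, adjusted R^2
```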

Results

On 10 January 2018 there were 7526 reviews indexed in the PEDro database. Our search retrieved 126 reviews that potentially fulfilled our eligibility criteria (Fig. 1), which were then screened in full text. Our final sample included 66 eligible reviews (Supplemental Online Material A: https://osf.io/xfcy7/); the remaining 60 were excluded because they did not fulfill our eligibility criteria (Supplemental Online Material B: https://osf.io/69aw5/). The 66 reviews were published in 42 journals; the journal names, along with the JIF, number of included reviews published in the journal, open access policy, and endorsement of the PRISMA for full texts checklist,21 are presented in Supplemental Online Material C: https://osf.io/jk7ew/.

Figure 1. Flow chart of included reviews.

All included reviews were published in English, 60% were open access (41% of journals were open access), and 92% of reviews had received citations (mean number of citations normalized by review age: 4.7 ± 7.8). Most reviews were written by a multicenter collaboration (73%) and were conducted in Europe (29%), Oceania (24%), and North America (21%). The mean number of included studies in the reviews was 14.4 ± 15.9, of which 12.6 ± 14.8 were randomized controlled trials and 1.7 ± 4.0 used other study designs. Descriptive data for the 66 included reviews, also separated into non-Cochrane (n = 57) and Cochrane (n = 9) reviews, are shown in Table 1.

Table 1.

Characteristics of the included reviews.

Variables  Total sample (n = 66)  Non-Cochrane reviews (n = 57)  Cochrane reviews (n = 9) 
Age of review (years)  2.1 ± 0.7  2.1 ± 0.7  2.1 ± 0.8 
Published in 2015  21 (32%)  18 (32%)  3 (33%) 
Published in 2016  30 (46%)  26 (46%)  4 (44%) 
Published in 2017  15 (23%)  13 (23%)  2 (22%) 
JIF 2017  4.0 ± 4.8  3.6 ± 5.1  6.8 ± 0.0 
Journals with an impact factor  61 (92%)  52 (91%)  9 (100%) 
Journals without an impact factor  5 (8%)  5 (9%)  0 (0%) 
Number of citations normalized by age  4.7 ± 7.8  4.6 ± 8.1  5.3 ± 5.8 
Reviews that had been cited  61 (92%)  53 (93%)  9 (100%) 
Reviews that had not been cited  5 (8%)  5 (9%)  0 (0%) 
Number of words in the abstract  #259 [208−346]  #252 [205−306]  #747 [569−836] 
Maximum words permitted in the abstract  280 ± 80  259 ± 67  400 ± 0 
Reviews that had to meet a word limit  61 (92%)  52 (91%)  9 (100%) 
Reviews that did not have to meet a word limit  5 (8%)  5 (9%)  0 (0%) 
Reviews published in journals with abstract word limits       
Reviews that adhered to word limit  25 (38%)  24 (42%)  0 (0%) 
Reviews that used fewer words than the limit  19 (29%)  19 (33%)  0 (0%) 
Reviews that exceeded the limit  18 (27%)  9 (16%)  9 (100%) 
Reviews published in journals without abstract word limits  5 (8%)  5 (9%)  0 (0%) 
Abstracts free of spin  13 (20%)  6 (9%)  7 (11%) 
Risk of bias tool used       
Cochrane risk of bias tool  41 (62%)  32 (56%)  9 (100%) 
Physiotherapy Evidence Database scale  7 (11%)  7 (12%)  0 (0%) 
Downs and Black Quality Index  2 (3%)  2 (4%)  0 (0%) 
Jadad score  2 (3%)  2 (4%)  0 (0%) 
Other  9 (14%)  9 (16%)  0 (0%) 
Did not use a risk of bias tool  4 (6%)  4 (7%)  0 (0%) 

Note: All data are means ± standard deviations, n (%), and # median [25th - 75th percentile]. Journal Impact Factor (JIF).

The AMSTAR-2 scores for methodological quality were “critically low” for 50 (76%) reviews, “low” for 7 (11%), “moderate” for 4 (6%), and “high” for 5 (8%). Cochrane reviews had higher methodological quality than non-Cochrane reviews. Methodological quality for Cochrane reviews ranged from “moderate” to “high”, while methodological quality for non-Cochrane reviews ranged from “critically low” to “low”. AMSTAR-2 individual items scores are described in Appendix A.

Abstract reporting

The mean “total number of PRISMA-A fully reported items” was 4.9 ± 2.7 out of 12 for all reviews: 4.1 ± 2.0 for non-Cochrane reviews and 9.9 ± 1.1 for Cochrane reviews. None of the reviews adhered to all 12 PRISMA-A items, and all reviews that adhered to 9, 10, or 11 items were Cochrane reviews (n = 8). The number and percentage of reviews fully reporting each PRISMA-A item are presented in Table 2. For the total sample, the items with the best reporting were title (88%), objectives (74%), and information sources (52%). The items with the lowest reporting were eligibility criteria (3%), registration (20%), and synthesis of results (21%). With the exception of the risk of bias item, abstracts of Cochrane reviews fully reported a higher percentage of every item than abstracts of non-Cochrane reviews. The item with the largest absolute percentage difference between Cochrane and non-Cochrane reviews was registration.

Table 2.

Number and percentage of fully reported items from the PRISMA-A for the abstract section of the included reviews.

Item  Description  Total sample (n = 66)  Non-Cochrane (n = 57)  Cochrane (n = 9) 
1. Title  Identify the report as a systematic review, meta-analysis, or both  57 (88%)  48 (84%)  9 (100%) 
2. Objectives  The research question including components such as participants, interventions, comparators, and outcomes  49 (74%)  40 (70%)  9 (100%) 
3. Eligibility criteria  Study and report characteristics used as criteria for inclusion  2 (3%)  0 (0%)  2 (22%) 
4. Information sources  Key databases searched and search dates  34 (52%)  25 (44%)  9 (100%) 
5. Risk of bias  Methods of assessing risk of bias  22 (33%)  19 (33%)  3 (33%) 
6. Included studies  Number and type of included studies and participants and relevant characteristics of studies  33 (50%)  24 (42%)  9 (100%) 
7. Synthesis of results  Results for main outcomes (benefits and harms), preferably indicating the number of studies and participants for each. If meta-analysis was done, include summary measures and confidence intervals  14 (21%)  6 (11%)  8 (89%) 
8. Description of the effect  Direction of the effect (i.e., which group is favored) and size of the effect in terms meaningful to clinicians and patients  22 (33%)  16 (28%)  6 (67%) 
9. Strengths and Limitations of evidence  Brief summary of strengths and limitations of evidence (e.g., inconsistency, imprecision, indirectness, or risk of bias, other supporting or conflicting evidence)  33 (50%)  24 (42%)  9 (100%) 
10. Interpretation  General interpretation of the results and important implications  19 (29%)  12 (21%)  7 (78%) 
11. Funding  Primary source of funding for the review  25 (38%)  16 (28%)  9 (100%) 
12. Registration  Registration number and registry name  13 (20%)  4 (7.0%)  9 (100%) 

Note: Data are described in n (%). PRISMA-A, Preferred Reporting Items for Systematic Reviews and Meta-Analyses for abstracts checklist.

Consistency of reporting comparing abstract to the full text

Reporting was typically inconsistent between abstract and full text, with 5 PRISMA-A items having Kappa values <0.20 (slight agreement), 3 items in the range 0.20–0.40 (fair agreement), 1 item in the range 0.40–0.60 (moderate agreement), and 2 items in the range 0.60–0.80 (substantial agreement) (Table 3). Additionally, four of the nine Cochrane reviews were republished in other journals.34–37 The republished versions had lower methodological quality and a lower “total number of PRISMA-A fully reported items” than the original Cochrane versions.

Table 3.

Consistency between abstracts and full texts for each item of the PRISMA-A, based on Kappa coefficients, for the 66 reviews.

Item  Kappa coefficient (95% CI) 
1. Title  *Not applicable 
2. Objectives  0.69 (0.50, 0.87) 
3. Eligibility criteria  0.05 (-0.02, 0.12) 
4. Information sources  0.07 (-0.02, 0.16) 
5. Risk of bias  0.15 (0.04, 0.25) 
6. Included studies  0.13 (-0.01, 0.27) 
7. Synthesis of results  0.48 (0.29, 0.66) 
8. Description of the effect  0.32 (0.16, 0.47) 
9. Strengths and Limitations of evidence  0.18 (0.04, 0.32) 
10. Interpretation  0.27 (0.11, 0.42) 
11. Funding  0.38 (0.22, 0.53) 
12. Registration  0.61 (0.42, 0.80) 

Note: Item 1 of the PRISMA-A was considered ‘Not applicable’ because it relates to the title. Items with Kappa values of 0.61 or higher are classified as “consistent”. PRISMA-A, Preferred Reporting Items for Systematic Reviews and Meta-Analyses for abstracts checklist.

Associations between abstract reporting and review and journal characteristics

Multicollinearity was not identified among the variables in the model. The multivariate model explained 72% of the variance in abstract reporting (“total number of PRISMA-A fully reported items”), as presented in Table 4. Higher reporting quality of abstracts was associated with higher JIF (ß 0.20; 95% CI: 0.06, 0.35), greater number of words in the abstract (ß 0.01; 95% CI: 0.00, 0.01), and higher review methodological quality (dummy variable ‘critically low’ with ß −3.06; 95% CI: −5.30, −0.82). Abstract reporting quality was not associated with journal endorsement of the PRISMA for full texts checklist,21 number of citations, or abstracts free of spin. In summary, according to this model the predicted “total number of PRISMA-A fully reported items” starts at 5.54 points (the constant) out of 12, increases by 0.20 points for every additional point of JIF, increases by 0.01 points for each additional word in the abstract, and decreases by 3.06 points for reviews with critically low methodological quality compared with reviews of high methodological quality.

Table 4.

Multivariate model of associations between review characteristics and the “total number of PRISMA-A fully reported items” (n = 66).

Multivariate regression
Dependent variable: Total number of PRISMA-A fully reported items
Adjusted R² = 0.72; Constant = 5.54 (95% CI: 2.46, 8.61)
Independent variables  ß  95% CI  P 
PRISMA for full texts checklist21 endorsement  −0.74  −1.64, 0.17  0.11 
Number of citations  −0.04  −0.13, 0.05  0.38 
2017 Journal Impact Factor  0.20  0.06, 0.35  *0.01 
Number of words in the abstract  0.01  0.00, 0.01  *0.03 
Abstracts free of spin  0.79  −0.41, 1.99  0.19 
Methodological quality score       
High  Reference variable  Reference variable   
Moderate  0.06  −1.91, 2.04  0.95 
Low  −0.75  −3.31, 1.81  0.56 
Critically low  −3.06  −5.30, -0.82  *0.01 

Note: PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses; *Values with P <0.05; CI = Confidence Interval.
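
As a worked illustration of the coefficients in Table 4 (ignoring the non-significant terms; the input values are hypothetical), a predicted score can be obtained by adding the constant to each coefficient multiplied by its predictor value.

```python
# Worked illustration of the fitted model in Table 4, using only the
# statistically significant terms. Input values below are hypothetical.
CONSTANT = 5.54
B_JIF = 0.20               # per point of 2017 Journal Impact Factor
B_WORDS = 0.01             # per word in the abstract
B_CRITICALLY_LOW = -3.06   # versus 'high' methodological quality (reference)

def predicted_prisma_a_score(jif: float, abstract_words: int, critically_low: bool) -> float:
    score = CONSTANT + B_JIF * jif + B_WORDS * abstract_words
    if critically_low:
        score += B_CRITICALLY_LOW
    return score

# e.g. a hypothetical 'critically low' review with JIF 3.0 and a 250-word abstract
print(round(predicted_prisma_a_score(3.0, 250, True), 1))  # -> 5.6
```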

Discussion

Our sample of SRs evaluating physical therapy interventions for LBP exhibited low reporting quality of abstracts as well as inconsistencies between the abstract and the full text. Abstracts with higher reporting quality were published in journals with a higher impact factor, had a greater number of words, and belonged to reviews of higher methodological quality. Abstracts of Cochrane reviews had higher reporting quality, and the reviews themselves higher methodological quality, compared with non-Cochrane reviews.

Abstracts of SRs of physical therapy interventions for LBP exhibited reporting quality similar to that reported for other healthcare areas,24,38,39 with reporting being particularly poor for treatment effects, harms, dates of search, assessment of risk of bias, and review registration. Our comparison of the abstracts of Cochrane and non-Cochrane reviews is also consistent with previous studies indicating that Cochrane reviews have higher methodological quality30 and better reporting quality40 than non-Cochrane reviews. These differences may be due to the rigorous editorial processes and the higher number of words permitted in the abstracts of Cochrane reviews.40

Reporting quality of abstracts could be improved by changing journal editorial policies regarding the abstract word count and the use of the PRISMA-A checklist, and by providing training for editors, reviewers, and authors. We strongly encourage journal editors to increase the abstract word count8 (possibly to 400 words) and to direct more attention to abstracts during the peer review process. Journal editors could also be more flexible with authors who exceed the abstract word count, provided all needed information is reported. We encourage journals to ask authors to complete the PRISMA-A and the PRISMA for full texts checklist21 and to encourage reviewers to use these checklists during the review process. Additionally, awareness of the PRISMA-A could be enhanced by adding a reference to it in the abstract section of the PRISMA for full texts,41,42 as occurs for the CONSORT for full text checklist.43 Universities could contribute by educating students about proper reporting of research studies and the existence of reporting checklists. Finally, we advise authors to write the abstract only after completing the full text to ensure better consistency between the abstract and the corresponding full text.

Study strengths and limitations

The major strengths of our study were the tools used to evaluate reporting and methodological quality, which were created by well-known and highly respected methodological research groups.3,23 On the other hand, our findings may only apply to SRs evaluating physical therapy interventions for LBP and may not generalize to abstracts of SRs in other fields. Also, we consulted the journals' instructions to authors between March and May 2018, and these instructions may have changed since the authors of the analyzed reviews consulted them.

Conclusion

The abstracts of SRs in physical therapy for LBP were poorly reported and inconsistent with the full text. The reporting quality of abstracts was higher in journals with a higher impact factor, in abstracts with a greater number of words, and when the review was of higher methodological quality. Our recommendations target journal editorial policies (including increasing the abstract word count, compulsory use of the PRISMA-A, and compulsory provision of training programs for editors and reviewers). For authors, we suggest writing the abstract after the full text has been completed, to ensure better reporting and consistency between the abstract and the corresponding full text.

Conflicts of interest

The authors declare no conflicts of interest. To maintain transparency, supplementary raw data can be found at https://osf.io/a9gz2/.

Acknowledgements

This work was supported by the Sao Paulo Research Foundation (grant #2016/17853-4); Sao Paulo/SP, Brazil. The Foundation had no role in the study design, conduct, and reporting of the present work.

Appendix A
Supplementary data

The following are Supplementary data to this article:

References
[1]
R. Herbert, G. Jamtvedt, J. Mead, K.B. Hagen.
Practical evidence-based physiotherapy.
2nd edition, Elsevier Butterworth-Heinemann, (2011),
[2]
K. Chiu, Q. Grundy, L. Bero.
‘Spin’ in published biomedical literature: a methodological systematic review.
[3]
E.M. Beller, P.P. Glasziou, D.G. Altman, et al.
PRISMA for Abstracts: reporting systematic reviews in journal and conference abstracts.
PLoS Med, 10 (2013),
[4]
F. Gomez-Garcia, J. Ruano, M. Aguilar-Luque, et al.
Abstract analysis method facilitates filtering low-methodological quality and high-bias risk systematic reviews on psoriasis interventions.
BMC Med Res Methodol, 17 (2017), pp. 180
[5]
T.J. O’Donohoe, R. Dhillon, T.L. Bridson, J. Tee.
Reporting quality of systematic review abstracts published in leading neurosurgical journals: A research on research study.
Neurosurgery, 85 (2019), pp. 1-10
[6]
J.J. Bigna, L.N. Um, J.R. Nansseu.
A comparison of quality of abstracts of systematic reviews including meta-analysis of randomized controlled trials in high-impact general medicine journals before and after the publication of PRISMA extension for abstracts: a systematic review and meta-analysis.
[7]
Y. Assem, S. Adie, J. Tang, I.A. Harris.
The over-representation of significant p values in abstracts compared to corresponding full texts: a systematic review of surgical randomized trials.
Contemp Clin Trials Commun, 7 (2017), pp. 194-199
[8]
G. Li, L.P.F. Abbade, I. Nwosu, et al.
A scoping review of comparisons between abstracts and full reports in primary biomedical research.
BMC Med Res Methodol, 17 (2017), pp. 181
[9]
D.P. Nascimento, G.Z. Gonzalez, A.C. Araujo, A.M. Moseley, C.G. Maher, L.O.P. Costa.
Abstracts of low back pain systematic reviews presented spin and inconsistencies with the full text: an overview study.
J Orthop Sports Phys Ther, 50 (2020), pp. 17-23
[10]
D.P. Nascimento, L.O.P. Costa.
Spin of results in scientific articles might kill you.
Braz J Phys Ther, 23 (2019), pp. 365-366
[11]
M.O. Almeida, T.P. Yamato, P. Parreira, L.O.P. Costa, S. Kamper, B.T. Saragiotto.
Overall confidence in the results of systematic reviews on exercise therapy for chronic low back pain: a cross-sectional analysis using the assessing the Methodological Quality of Systematic Reviews (AMSTAR) 2 tool.
Braz J Phys Ther, 24 (2020), pp. 103-117
[12]
GBD 2016 Disease and Injury Incidence and Prevalence Collaborators.
Global, regional, and national incidence, prevalence, and years lived with disability for 328 diseases and injuries for 195 countries, 1990-2016: a systematic analysis for the Global Burden of Disease Study 2016.
Lancet, 390 (2017), pp. 1211-1259
[13]
G. Ferreira, L.M. Costa, A. Stein, J. Hartvigsen, R. Buchbinder, C.G. Maher.
Tackling low back pain in Brazil: a wake-up call.
Braz J Phys Ther, 23 (2019), pp. 189-195
[14]
J. Hartvigsen, M.J. Hancock, A. Kongsted, et al.
What low back pain is and why we need to pay attention.
Lancet, 391 (2018), pp. 2356-2367
[15]
C. Maher, M. Underwood, R. Buchbinder.
Non-specific low back pain.
Lancet, 389 (2017), pp. 736-747
[16]
L.D. Bardin, P. King, C.G. Maher.
Diagnostic triage for low back pain: a practical approach for primary care.
Med J Aust, 206 (2017), pp. 268-273
[17]
P.F. Beattie, S.P. Silfies, M. Jordon.
The evolving role of physical therapists in the long-term management of chronic low back pain: Longitudinal care using assisted self-management strategies.
Braz J Phys Ther, 20 (2016), pp. 580-591
[18]
I.A. Bernstein, Q. Malik, S. Carville, S. Ward.
Low back pain and sciatica: summary of NICE guidance.
BMJ, 356 (2017), pp. i6748
[19]
N.E. Foster, J.R. Anema, D. Cherkin, et al.
Prevention and treatment of low back pain: evidence, challenges, and promising directions.
Lancet, 391 (2018), pp. 2368-2383
[20]
Z.A. Michaleff, L.O. Costa, A.M. Moseley, et al.
CENTRAL, PEDro, PubMed, and EMBASE are the most comprehensive databases indexing randomized controlled trials of physical therapy interventions.
Phys Ther, 91 (2011), pp. 190-197
[21]
A. Liberati, D.G. Altman, J. Tetzlaff, et al.
The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration.
BMJ, 339 (2009), pp. b2700
[22]
J.J. Bigna, J.J. Noubiap, S.L. Asangbeh, et al.
Abstracts reporting of HIV/AIDS randomized controlled trials in general medicine and infectious diseases journals: completeness to date and improvement in the quality since CONSORT extension for abstracts.
BMC Med Res Methodol, 16 (2016), pp. 138
[23]
B.J. Shea, B.C. Reeves, G. Wells, et al.
AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both.
BMJ, 358 (2017), pp. j4008
[24]
J. Kiriakou, N. Pandis, P.S. Fleming, P. Madianos, A. Polychronopoulou.
Reporting quality of systematic review abstracts in leading oral implantology journals.
J Dent, 41 (2013), pp. 1181-1187
[25]
D.B. Rice, L.A. Kloda, I. Shrier, B.D. Thombs.
Reporting quality in abstracts of meta-analyses of depression screening tool accuracy: a review of systematic reviews and meta-analyses.
[26]
S.Y. Song, B. Kim, I. Kim, et al.
Assessing reporting quality of randomized controlled trial abstracts in psychiatry: adherence to CONSORT for abstracts: a systematic review.
[27]
A. Yavchitz, P. Ravaud, D.G. Altman, et al.
A new classification of spin in systematic reviews and meta-analyses was developed and ranked according to the severity.
J Clin Epidemiol, 75 (2016), pp. 56-65
[28]
PRISMA Endorsers. Preferred Reporting Items for Systematic Reviews and Meta-Analyses. PRISMA Endorsers. 2015; http://www.prisma-statement.org/Endorsement/PRISMAEndorsers.aspx. Accessed 5th November 2018.
[29]
R. Haneef, A. Yavchitz, P. Ravaud, et al.
Interpretation of health news items reported with or without spin: protocol for a prospective meta-analysis of 16 randomised controlled trials.
[30]
A.M. Moseley, M.R. Elkins, R.D. Herbert, C.G. Maher, C. Sherrington.
Cochrane reviews used more rigorous methods than non-Cochrane reviews: survey of systematic reviews in physiotherapy.
J Clin Epidemiol, 62 (2009), pp. 1021-1030
[31]
A.J. Viera, J.M. Garrett.
Understanding interobserver agreement: the Kappa statistic.
Fam Med, 37 (2005), pp. 360-363
[32]
Z. Zhou, H.C. Ku, G. Xing, C. Xing.
Decomposing Pearson’s Chi2 test: a linear regression and its departure from linearity.
Ann Hum Genet, 82 (2018), pp. 318-324
[33]
J.G. Prunier, M. Colyn, X. Legendre, K.F. Nimon, M.C. Flamand.
Multicollinearity in spatial genetics: separating the wheat from the chaff using commonality analyses.
Mol Ecol, 24 (2015), pp. 263-283
[34]
H. Franke, G. Fryer, R.W. Ostelo, S.J. Kamper.
Muscle energy technique for non-specific low-back pain.
Cochrane Database Syst Rev, (2015),
[35]
B.T. Saragiotto, C.G. Maher, T.P. Yamato, et al.
Motor control exercise for nonspecific low back pain: a cochrane review.
Spine (Phila Pa 1976)., 41 (2016), pp. 1284-1295
[36]
T.P. Yamato, C.G. Maher, B.T. Saragiotto, et al.
Pilates for low back pain: complete republication of a cochrane review.
Spine (Phila Pa 1976), 41 (2016), pp. 1013-1021
[37]
F. Zaina, C. Tomkins-Lane, E. Carragee, S. Negrini.
Surgical versus nonsurgical treatment for lumbar spinal stenosis.
Spine (Phila Pa 1976)., 41 (2016), pp. E857-868
[38]
S. Hopewell, I. Boutron, D.G. Altman, P. Ravaud.
Deficiencies in the publication and reporting of the results of systematic reviews presented at scientific medical conferences.
J Clin Epidemiol, 68 (2015), pp. 1488-1495
[39]
J. Seehra, P.S. Fleming, A. Polychronopoulou, N. Pandis.
Reporting completeness of abstracts of systematic reviews published in leading dental specialty journals.
Eur J Oral Sci, 121 (2013), pp. 57-62
[40]
M.J. Page, L. Shamseer, D.G. Altman, et al.
Epidemiology and reporting characteristics of systematic reviews of biomedical research: a cross-sectional study.
[41]
D. Moher, A. Liberati, J. Tetzlaff, D.G. Altman, PRISMA Group.
Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement.
Int J Surg, 8 (2010), pp. 336-341
[42]
D. Moher, A. Liberati, J. Tetzlaff, D.G. Altman, PRISMA Group.
Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement.
PLoS Med, 6 (2009),
[43]
K.F. Schulz, D.G. Altman, D. Moher, CONSORT Group.
CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials.
BMJ, 340 (2010),
Copyright © 2020. Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia