Best practices
A set of guidelines we strongly recommend for MRI data-analyses. Our recommendations are based on our own experience and that of our users, and they hold for all types of MRI data, including functional, structural, diffusion and spectroscopy data, whether acquired in living or deceased humans or in non-human species.
1. Quality assessment (QA)
When?
Always, always, ALWAYS check the quality of your MRI data. Period. No exceptions.
Data-acquisition and data-analysis protocols are too complex to verify by simply reading the code, scripts or parameters. Checking code or parameters by eye is not just difficult, it is impossible.
If the scanner ran correctly yesterday, that does not mean it will do so forever. Scanners, hardware and software have a limited lifetime, so the question is not whether something will break down but when.
Thus QA must be performed:
- After every data-acquisition: QA typically takes 5-30 minutes. Once you have checked a few subjects, you can judge whether something looks unusual in the next few.
- After every analysis step: When automatic algorithms fail, they tend to fail spectacularly, which makes it easy to notice. Do not assume that because a step ran correctly once it will do so forever. Most software packages let you visualise the results quickly or automatically output images to check (see the sketch after this list).
- When you observe an unexpected or unusual result: it may be genuine, or it may be the consequence of something that went wrong during the analysis. Do not immediately assume it is a true result rather than an artefact.
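
As one way to automate the image outputs mentioned above, the following minimal sketch (in Python, assuming nibabel and matplotlib are available; file names and paths are placeholders, not part of any specific pipeline) writes a mosaic of axial slices to a PNG after an analysis step, so you can flip through the results quickly.

    # Minimal sketch: write a grid of evenly spaced axial slices to a PNG so the
    # output of an analysis step can be eyeballed without opening a viewer.
    # Assumes nibabel and matplotlib; file names below are hypothetical.
    import nibabel as nib
    import numpy as np
    import matplotlib.pyplot as plt

    def save_slice_mosaic(nifti_path, png_path, n_slices=12, n_cols=4):
        img = nib.load(nifti_path)
        data = np.asanyarray(img.dataobj)
        if data.ndim == 4:                     # 4D run: inspect the mean volume
            data = data.mean(axis=-1)
        z_indices = np.linspace(0, data.shape[2] - 1, n_slices).astype(int)
        n_rows = int(np.ceil(n_slices / n_cols))
        fig, axes = plt.subplots(n_rows, n_cols, figsize=(3 * n_cols, 3 * n_rows))
        panels = np.atleast_1d(axes).ravel()
        for ax in panels:
            ax.axis("off")                     # hide unused panels too
        for ax, z in zip(panels, z_indices):
            ax.imshow(data[:, :, z].T, cmap="gray", origin="lower")
            ax.set_title(f"z = {z}", fontsize=8)
        fig.savefig(png_path, dpi=150, bbox_inches="tight")
        plt.close(fig)

    # Hypothetical usage after, e.g., a brain-extraction step:
    # save_slice_mosaic("sub-01_T1w_brain.nii.gz", "sub-01_T1w_brain_qa.png")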
Who?
You need to check the quality yourself. We check things too but cannot check every sequence, session, participant or patient every time.
There are many ways to check quality, and several automatic programs exist, but the best way is to visualise the data and use your common sense. When something fails, it typically fails spectacularly: brains that no longer look like brains, signal dropouts in certain regions, and so on.
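
For functional runs, one simple way to make signal dropouts and gross artefacts jump out is a temporal SNR (tSNR) map. The sketch below is only an illustration, assuming Python with nibabel, numpy and matplotlib; the file names are hypothetical.

    # Minimal sketch: compute a temporal SNR map (mean over time divided by the
    # standard deviation over time) and save three orthogonal slices for a quick
    # visual check. Assumes a 4D functional NIfTI; file names are hypothetical.
    import nibabel as nib
    import numpy as np
    import matplotlib.pyplot as plt

    img = nib.load("sub-01_task-rest_bold.nii.gz")       # hypothetical 4D run
    data = np.asanyarray(img.dataobj).astype(np.float32)

    mean_vol = data.mean(axis=-1)
    std_vol = data.std(axis=-1)
    tsnr = np.zeros_like(mean_vol)
    np.divide(mean_vol, std_vol, out=tsnr, where=std_vol > 0)

    cx, cy, cz = (s // 2 for s in tsnr.shape)            # volume centre
    slices = (tsnr[cx, :, :], tsnr[:, cy, :], tsnr[:, :, cz])
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    for ax, sl in zip(axes, slices):
        im = ax.imshow(sl.T, cmap="viridis", origin="lower")
        ax.axis("off")
    fig.colorbar(im, ax=list(axes), shrink=0.8, label="tSNR")
    fig.savefig("sub-01_tsnr_qa.png", dpi=150, bbox_inches="tight")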
We are happy to help. If you have any concerns, please contact us.
2. See point 1
Please reread our first recommendation. It's no joke.
- It sounds simple, but we know of studies that collected data from 30 to 60 or more participants only to find out that the data-collection was wrong or contained artefacts. The data turned out to be useless.
- The literature contains several studies that turned out to be invalid because the analysis was incorrect, and even major software packages have made mistakes in their implementation of analyses. At some point we will list some examples here. An invalid data-analysis can be corrected post hoc, but an invalid publication is embarrassing, to say the least.
Don't let it happen to you.
Certainly, you can wait with the full analysis until all participants have been collected, but check the quality after every data-collection and after every analysis step.
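
If you want a quick overview of everything collected so far, a crude batch pass like the one sketched below (Python with nibabel; the BIDS-like glob pattern and directory layout are assumptions, adapt them to your own setup) already makes an odd run stand out, but it complements rather than replaces looking at the images.

    # Minimal sketch: loop over all acquired runs and print a few crude summary
    # numbers, so an unusual run stands out at a glance. The glob pattern is an
    # assumption; adapt it to your own directory layout.
    import glob
    import nibabel as nib
    import numpy as np

    for path in sorted(glob.glob("rawdata/sub-*/func/*_bold.nii.gz")):
        img = nib.load(path)
        data = np.asanyarray(img.dataobj).astype(np.float32)
        n_vols = data.shape[-1] if data.ndim == 4 else 1
        zero_frac = float((data == 0).mean())   # high values hint at dropouts or cropping
        print(f"{path}: volumes={n_vols}, mean signal={data.mean():.1f}, "
              f"zero fraction={zero_frac:.3f}")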