Best practices
From SpinozaWiki
Revision as of 12:57, 19 November 2020
A set of guidelines strongly recommended for MRI data analyses. Our recommendations are based on our own experiences and those of our users, and hold for all types of MRI data, including functional, structural, diffusion, and spectroscopy, whether acquired in living or deceased humans or in non-human species.
1. Quality assessment (QA)
Always, always, ALWAYS check the quality of your MRI data. After data acquisition for every participant. After every analysis step.
The data-acquisition and data-analysis protocols are too complex to just check the code, scripts, or parameters. It is not just difficult to check the code or parameters by eye; it is impossible.
You need to check the quality yourself. There are many ways to do this. Several automatic programs exist, but the best way is to visualise the data and use your common sense. If something fails, it typically fails spectacularly: brains that no longer look like brains, signal dropouts in certain regions.
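A minimal sketch of what such a sanity check could look like, assuming the image has already been loaded as a NumPy array (for example via nibabel's get_fdata()). The thresholds and the function name quick_qc are illustrative assumptions, not validated defaults; this flags only the spectacular failures described above and is no substitute for looking at the images yourself.

```python
import numpy as np

def quick_qc(volume, dropout_threshold=0.05):
    """Flag gross failures: mostly-empty volumes and dead slices.

    `dropout_threshold` is an assumed, illustrative cutoff: a slice whose
    mean signal falls below this fraction of the brightest slice's mean
    is treated as a possible signal dropout.
    """
    vol = np.asarray(volume, dtype=float)
    report = {}
    # Fraction of voxels with (near-)zero signal across the whole volume.
    near_zero = np.mean(np.abs(vol) < 1e-6 * np.max(np.abs(vol)))
    report["near_zero_fraction"] = float(near_zero)
    # Mean signal per slice along the last axis; an empty slice suggests dropout.
    slice_means = vol.reshape(-1, vol.shape[-1]).mean(axis=0)
    report["empty_slices"] = int(
        np.sum(slice_means < dropout_threshold * slice_means.max())
    )
    report["suspicious"] = (
        report["near_zero_fraction"] > 0.5 or report["empty_slices"] > 0
    )
    return report

# A plausible synthetic volume passes; one with a zeroed slice is flagged.
rng = np.random.default_rng(0)
good = rng.uniform(100, 200, size=(16, 16, 10))
bad = good.copy()
bad[:, :, 5] = 0.0
print(quick_qc(good)["suspicious"])  # False
print(quick_qc(bad)["suspicious"])   # True
```

Such a script can run automatically after every acquisition and every analysis step, but a flagged (or even passing) result should always send you back to the images themselves.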
If you have any concerns, please contact us.
2. See point 1
Please reread our first recommendation. It's no joke.
We know of studies that collected data from 30 to 60 or more participants, only to find out that the data collection was flawed or the data contained artifacts. The data turned out to be useless.
Certainly, you can postpone the full analysis until you have collected all participants, but check the quality after every data-collection session and after every analysis step.