In PET imaging, a suitable method for calculating uptake must take into account the partial volume effect (PVE) and the heterogeneity of uptake within the lesion. If your goal is to segment the lesions, I suggest you use 3-D LARW. You can download my paper about this method from my page.
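To make the PVE point concrete: a common first-order correction (separate from segmentation methods like 3-D LARW, and much cruder) divides the measured uptake by a recovery coefficient (RC) measured on phantom spheres of known size. The table values below are hypothetical, for illustration only:

```python
# First-order partial-volume correction using recovery coefficients (RCs)
# measured on phantom spheres. The table entries below are made up for
# illustration; real RCs depend on scanner, reconstruction, and contrast.
# This is NOT the 3-D LARW method, just a common companion step.

# (sphere diameter in mm, recovery coefficient) from a hypothetical phantom scan
RC_TABLE = [(10, 0.45), (13, 0.60), (17, 0.75), (22, 0.85), (28, 0.92), (37, 0.97)]

def recovery_coefficient(diameter_mm):
    """Linearly interpolate the RC between phantom measurements."""
    if diameter_mm <= RC_TABLE[0][0]:
        return RC_TABLE[0][1]
    if diameter_mm >= RC_TABLE[-1][0]:
        return RC_TABLE[-1][1]
    for (d0, r0), (d1, r1) in zip(RC_TABLE, RC_TABLE[1:]):
        if d0 <= diameter_mm <= d1:
            t = (diameter_mm - d0) / (d1 - d0)
            return r0 + t * (r1 - r0)

def pve_corrected_uptake(measured_mean, lesion_diameter_mm):
    """Divide the measured mean uptake by the interpolated RC."""
    return measured_mean / recovery_coefficient(lesion_diameter_mm)
```

For example, a 17 mm lesion with a measured mean of 3.0 would be corrected to about 4.0 with this table. Note this assumes roughly spherical, homogeneous lesions, which is exactly the assumption that breaks down for heterogeneous uptake.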
The simple answer is that the voxel values of a PET image already represent the uptake (usually in kBq/ml or similar units). SPECT quantification is a bit more complicated.
If it is clinical FDG data, I suggest you use the SUV, since it is the most widespread option and will be the most widely accepted. Of course, to choose the proper SUV normalization you should take into account the type of region you are measuring (Is it small? Is it surrounded by low or high activity? etc.). It would also be good to know what you are trying to achieve (repeatability, precision, accuracy?) in order to recommend the best method.
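If it helps to make the discussion concrete, here is a minimal sketch of the most common normalization, body-weight SUV. The function name and unit handling are my own, and I omit decay correction of the dose, which clinical tools apply:

```python
# Minimal body-weight SUV sketch (illustrative; check against vendor tools).
# SUVbw = C_tissue [kBq/ml] / (injected_dose [kBq] / body_weight [g])
# Assuming a tissue density of ~1 g/ml, SUVbw is dimensionless; a value of
# 1.0 corresponds to the dose being spread uniformly over the whole body.

def suv_bw(activity_conc_kbq_ml: float,
           injected_dose_mbq: float,
           body_weight_kg: float) -> float:
    """Body-weight-normalized SUV (no decay correction applied here)."""
    dose_kbq = injected_dose_mbq * 1000.0   # MBq -> kBq
    weight_g = body_weight_kg * 1000.0      # kg -> g
    return activity_conc_kbq_ml / (dose_kbq / weight_g)

# Example: 5 kBq/ml in a lesion, 370 MBq injected, 70 kg patient
print(round(suv_bw(5.0, 370.0, 70.0), 2))   # -> 0.95
```

Other normalizations (lean body mass, body surface area) only change the denominator, which is why the choice matters most for patients far from average body composition.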
You can take a quick look at the attached publication, which describes some benefits and drawbacks of the different common normalization options.
In any case, the question is a bit open, so if you give more details about the project itself we will be able to provide a better answer.
Regards,
Article Simulated FDG-PET studies for the assessment of SUV quantifi...
Even though SUVmax is widely used, other segmentation methods are under investigation. We found that MTV might be one of the alternatives and can give more comprehensive quantitative information. One of my colleagues has a recent article about this:
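For readers unfamiliar with MTV: it is the metabolic tumor volume, i.e. the volume of tumor above some SUV threshold, often reported together with total lesion glycolysis (TLG = MTV × SUVmean). A toy sketch using a fixed threshold of 41% of SUVmax, which is one common convention (the threshold choice is itself debated):

```python
# Illustrative MTV/TLG computation with a fixed-percentage threshold.
# 41% of SUVmax is one common choice; adaptive and gradient-based methods
# exist too. `suv_values` is a flat list of SUVs inside a lesion VOI and
# `voxel_volume_ml` comes from the image spacing; all names are my own.

def mtv_tlg(suv_values, voxel_volume_ml, fraction=0.41):
    threshold = fraction * max(suv_values)
    inside = [v for v in suv_values if v >= threshold]
    mtv_ml = len(inside) * voxel_volume_ml        # metabolic tumor volume
    tlg = mtv_ml * (sum(inside) / len(inside))    # MTV x SUVmean
    return mtv_ml, tlg

voxels = [8.0, 6.0, 4.0, 3.0, 1.0, 0.5]          # toy lesion SUVs
mtv, tlg = mtv_tlg(voxels, voxel_volume_ml=0.064)
```

With these toy numbers the threshold is 3.28, so three voxels are counted; the point is that MTV captures extent as well as intensity, which a single SUVmax voxel cannot.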
For SPECT, many of the reconstruction algorithms are designed to make pretty pictures rather than quantitative ones. Avoid those. Do a search on "quantitative SPECT reconstruction" on Google to see the variety. Which algorithm to choose depends a bit on what equipment you have, for example, with or without CT attenuation correction.
With CT attenuation correction there is a general problem. Typically, the reconstructed field of view of the CT images is smaller than the bore of the imaging device, so for a large patient the edge of the CT reconstructed field falls inside the patient. The solution to this problem would be to do limited-angle reconstruction over a larger matrix than the bore of the imaging device, but I have never seen that done. So CT attenuation correction sometimes uses methods that attempt to overcome the original sin of doing things, well, stupidly. This can include, for example, Hilbert-space reconstruction that follows an attenuation reconstruction pathway including only those portions of the patient that are totally within the field of view. As a consequence of doing things stupidly, attenuation correction coefficients sometimes seem (e.g., on GE CT-SPECT of a few years ago) to be set up so that patients totally within the field of view are over-corrected for attenuation, those who are partly outside the field of view are correctly corrected, and those with massive body size are under-corrected.
Without CT, some reconstruction algorithms are more accurate than others; see, for example, the review article "Filtering in SPECT Image Reconstruction" by Maria Lyra and Agapi Ploussi. Some think the Butterworth filter is superior for quantification, but opinions vary, and iterative techniques are sometimes better. Unfortunately, this is a matter of what is available on your equipment, in your setting. Given that choice, which is always a subset of all possibilities, there will be one method that is better for quantification, while most of the methods available will be dedicated to making pretty pictures with little thought to maximizing quantitative fidelity. So you have to think it through for the equipment you have, and doing phantom studies may be necessary to get a handle on what is less ridiculous, but even that should only be done after a literature search on the reconstruction methods you actually have available. And even phantom studies have to be done in a way that corresponds to actual patient data; for example, think of the CT field-of-view problems for large patients described above.
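For what it's worth, the Butterworth window mentioned above is easy to write down. This sketch uses the classic magnitude-response convention; vendors differ in how they define order and cutoff, so treat the numbers as illustrative:

```python
import math

# Butterworth low-pass window, commonly used to apodize the ramp filter in
# filtered back-projection (FBP). Conventions differ between vendors (some
# omit the square root, some define the order differently); this is the
# classic magnitude response, for illustration only.

def butterworth(f, cutoff, order):
    """|B(f)| for a Butterworth low-pass with the given cutoff frequency."""
    return 1.0 / math.sqrt(1.0 + (f / cutoff) ** (2 * order))

def ramp_butterworth(f, cutoff, order):
    """Ramp filter apodized by the Butterworth window, as used in FBP."""
    return abs(f) * butterworth(f, cutoff, order)

# At the cutoff frequency the response drops to 1/sqrt(2):
print(round(butterworth(0.5, cutoff=0.5, order=5), 3))   # -> 0.707
```

The practical trade-off is that a lower cutoff suppresses noise but blurs small structures, which is exactly why the "pretty picture" settings and the quantitatively faithful settings tend to differ.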
If you have DICOM data, I would suggest you download the DICOM reading tool OsiriX. It is free but only runs on macOS. In OsiriX you can read SUV and activity in ROIs and VOIs you segment yourself. It is easy and straightforward, and OsiriX is good software that can also be used for clinical reading.
If you want to use Matlab, there are DICOM read and write tools for that, but to make sure you get the right SUV or activity values you need to know what you are doing, and as a minimum you should check the values you get against tools like OsiriX, Mirada, syngo.via, EBW, etc.
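One concrete pitfall when rolling your own SUV calculation (sketched in Python here, but the arithmetic is identical in Matlab) is decay correction of the injected dose. The attribute named in the comments is a standard DICOM field, but whether your images are already decay-corrected, and to which time point, varies by vendor, which is exactly why checking against OsiriX or similar matters:

```python
import math

# Decay-correcting the injected dose to scan time, the step most often
# gotten wrong when computing SUV by hand. Dose and injection time come
# from the DICOM RadiopharmaceuticalInformationSequence; check the
# DecayCorrection (0054,1102) attribute to see what your images assume,
# and verify the final SUV against a vendor tool. Numbers are illustrative.

F18_HALF_LIFE_S = 6586.2   # F-18 half-life, about 109.8 minutes

def decay_corrected_dose(injected_dose_mbq, seconds_since_injection,
                         half_life_s=F18_HALF_LIFE_S):
    """Dose remaining at scan time, given the dose measured at injection."""
    decay_factor = math.exp(-math.log(2) * seconds_since_injection / half_life_s)
    return injected_dose_mbq * decay_factor

# 370 MBq injected, scan started 60 minutes later: roughly 253 MBq remain.
print(round(decay_corrected_dose(370.0, 3600.0), 1))
```

Using the uncorrected dose in the SUV denominator inflates the result by the inverse of the decay factor, which for FDG an hour post-injection is an error of roughly 45%.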
You can also use packages such as MINC for reading and advanced processing, but if you want to do advanced processing you can also write plugins for OsiriX.
PET, and to some degree SPECT, are quantitative modalities, but there is a lot of discussion about how to do quantification right and much work on standardization. For guidelines I would suggest you go to the nuclear imaging organizations, EANM and SNMMI; check their home pages and their journals (EJNMMI and JNM).
Saeed, in humans or animals? For animals, probably the best is to present the uptake as a percentage of the injected dose per gram (%ID/g). For humans, SUV values are commonly used. There is some free software you can use, such as Vinci (http://www.nf.mpg.de/vinci/AboutVinci.html) or Amide (http://amide.sourceforge.net). You can also use PMOD (http://www.pmod.com/), but it is not free and not approved for clinical use. In general, every vendor provides its own software for reconstruction/quantification.
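For the animal case, the %ID/g arithmetic is simple enough to sketch. Names and example numbers are mine; remember to decay-correct both measurements to a common time point before applying it:

```python
# Percent injected dose per gram, the usual small-animal uptake metric.
# Units and names are my own illustration; decay-correct the tissue
# activity and the injected dose to the same time point first.

def percent_id_per_g(tissue_activity_kbq, tissue_mass_g, injected_dose_mbq):
    """%ID/g = 100 * (tissue activity / injected dose) / tissue mass."""
    injected_kbq = injected_dose_mbq * 1000.0   # MBq -> kBq
    return 100.0 * (tissue_activity_kbq / injected_kbq) / tissue_mass_g

# Example: 50 kBq measured in a 0.2 g organ after a 10 MBq injection
print(round(percent_id_per_g(50.0, 0.2, 10.0), 2))   # -> 2.5
```

The same number can be read from images (VOI activity and volume, assuming ~1 g/ml density) or from ex vivo biodistribution counting, which is one reason %ID/g is the standard preclinical report.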
I have used Vinci as well, and it is good for DICOM if you only have a few data sets. For larger DICOM data sets, OsiriX is better as it also has a database.
But for odd data formats and phantom data sets, Vinci is by far the best tool.