After all the advances of bioinformatics in protein modelling, it feels like scientists still do not trust most of the data generated using in-silico techniques.
A model is a hypothesis concerning the structure of a protein. Like any hypothesis, it is only valuable if it leads to predictions that can be verified experimentally. In the Plückthun lab, we use in-silico design in close conjunction with in-vitro evolution and experimental verification. In this context, in-silico design is extremely valuable.
In many cases, it can give us good candidates. But a prediction alone, without further verification, may be enough to write a paper, yet it contributes nothing to research.
Hi Dr. Miguel de Abreu, your post really interests me. I hope your question will get more interesting comments.
To answer your question, I would say yes. In silico studies are very helpful to the research community if you have enough data. Researchers working on hypothetical proteins, or on a protein lacking sufficient data or a structure, really face a difficult task. In such cases, we compare the predictions we have with experimental data. In the same way, in silico studies are very helpful for supporting experimental data too.
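To make the comparison between prediction and experiment concrete, here is a minimal sketch (not anyone's published pipeline) of superposing a predicted model onto an experimental structure and reporting the Cα RMSD with Biopython. The file names are placeholders, and the sketch assumes both files cover the same residue range in their first chain.

```python
from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
predicted = parser.get_structure("pred", "predicted_model.pdb")  # placeholder file
experimental = parser.get_structure("exp", "experimental.pdb")   # placeholder file

# Collect CA atoms from the first chain of each structure.
pred_ca = [res["CA"] for res in next(predicted[0].get_chains()) if "CA" in res]
exp_ca = [res["CA"] for res in next(experimental[0].get_chains()) if "CA" in res]
n = min(len(pred_ca), len(exp_ca))  # crude guard against length mismatch

sup = Superimposer()
sup.set_atoms(exp_ca[:n], pred_ca[:n])  # fixed atoms first, then moving
sup.apply(pred_ca[:n])                  # superpose the model onto the experiment
print(f"CA RMSD: {sup.rms:.2f} Å")
```

A low RMSD does not prove the model is right everywhere, but it is a simple first check of how well a prediction agrees with the experimental data.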
An in silico result generated from little evidence is just a hint for other researchers.
In a number of projects, I have been able to use protein modeling to sufficiently stabilize a protein or improve its folding efficiency. This enabled us to produce sufficient amounts of the protein to obtain either an X-ray or an NMR structure.
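To give a flavour of how a model can guide such stabilization, below is a hedged sketch (one possible approach I am assuming here, not the exact procedure used in those projects) that flags hydrophobic residues a model predicts to be solvent-exposed, as candidate positions for stabilizing mutations. The file name 'model.pdb', the 0.3 relative-SASA cutoff, and the rounded reference values are illustrative assumptions.

```python
from Bio.PDB import PDBParser
from Bio.PDB.SASA import ShrakeRupley

# Approximate theoretical maximum SASA per residue type (Å², after Tien et al. 2013).
MAX_SASA = {"ALA": 129, "VAL": 174, "LEU": 201, "ILE": 197,
            "MET": 224, "PHE": 240, "TRP": 285}

parser = PDBParser(QUIET=True)
structure = parser.get_structure("model", "model.pdb")  # placeholder file
ShrakeRupley().compute(structure, level="R")            # stores residue.sasa (Å²)

for residue in structure.get_residues():
    name = residue.get_resname()
    if name in MAX_SASA and residue.sasa / MAX_SASA[name] > 0.3:  # illustrative cutoff
        print(f"{name} {residue.id[1]}: relative SASA "
              f"{residue.sasa / MAX_SASA[name]:.2f} -> candidate mutation site")
```

Exposed hydrophobic patches are a common cause of aggregation, so positions flagged this way are plausible candidates for mutation to polar residues before the next round of experiments.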
People are so eager to forget that the DNA structure itself was discovered through modeling, only *using* available experimental data as input. This should be read as computational/structural modeling serving as a primary approach that can lead to valuable inferences from experimental data. It also highlights the primacy of thought experiments.
The DNA structure is a prime example of what a model should do: it yielded a plausible explanation for well-studied but puzzling experimental findings, such as Chargaff's rules (a quick computational check of which is sketched after this paragraph):
A, T, C, and G were not found in equal quantities (as some models at the time would have predicted)
The amounts of the bases varied among species, but not between individuals of the same species
The amount of A always equalled the amount of T, and the amount of C always equalled the amount of G (A = T and G = C)
and for the semi-conservative replication of DNA. It also fitted Rosalind Franklin's X-ray diffraction pattern, which suggested a two-stranded, helical structure from which the overall dimensions could be deduced. The model has since been well verified by X-ray crystallography, although other conformations have been shown to exist for particular sequences and environmental conditions, such as A- and Z-DNA, as well as DNA quadruplexes.
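Incidentally, Chargaff's first parity rule falls out of base complementarity in the double helix, which a few lines of Python can illustrate; the example strand below is an arbitrary toy sequence:

```python
from collections import Counter

COMPLEMENT = str.maketrans("ATGC", "TACG")

def duplex_counts(strand):
    """Base counts over both strands of a DNA duplex. Complementarity
    forces A = T and G = C (Chargaff's first parity rule), while the
    overall GC fraction is free to vary between species."""
    return Counter(strand + strand.translate(COMPLEMENT))

counts = duplex_counts("ATGGCGTATTAGCCGAT")  # arbitrary toy strand
print(counts)
print(counts["A"] == counts["T"], counts["G"] == counts["C"])  # True True
```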
The prime advantage Watson and Crick had over competitors such as Linus Pauling was access to Rosalind Franklin's data, which allowed them to identify the correct model, although the way they obtained those data was questionable.
I think it needs "dry" and "wet" combined. The dry side is theoretical computation, such as molecular dynamics simulations, molecular docking, and so on; the wet side is the experimental results.
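For readers unfamiliar with the "dry" side, here is a minimal sketch of a molecular dynamics run with OpenMM. It assumes OpenMM is installed and that 'protein.pdb' is a placeholder for a prepared model with hydrogens added; a real study would add solvent, proper equilibration, and far longer sampling.

```python
from openmm import LangevinMiddleIntegrator
from openmm.app import PDBFile, ForceField, Simulation, NoCutoff
from openmm.unit import kelvin, picosecond, picoseconds

pdb = PDBFile("protein.pdb")                # placeholder input model
forcefield = ForceField("amber14-all.xml")  # protein force field
system = forcefield.createSystem(pdb.topology, nonbondedMethod=NoCutoff)

# Langevin dynamics at 300 K with a 2 fs time step.
integrator = LangevinMiddleIntegrator(300*kelvin, 1/picosecond, 0.002*picoseconds)
simulation = Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)

simulation.minimizeEnergy()  # relax the starting model
simulation.step(5000)        # 10 ps of in-vacuo dynamics, a toy trajectory
```

Docking would follow a similar pattern with a different tool; the point is that such "dry" results only become trustworthy once they are checked against "wet" experiments.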
In silico protein modeling is indeed a boon in terms of saving time and resources and enabling cutting-edge research such as drug discovery and vaccine development. Moreover, much of the work done in this area shows that the results obtained from this approach are quite reliable. But a result with 100% resemblance to the in vitro or in vivo results has not been recorded yet.
There are multiple issues yet to be addressed in this context. Improving models by refining their coordinates toward the native state, for instance, is one key aspect of protein modelling that still has to be worked on, and this can be achieved by developing more accurate methods. Issues of this kind are what make some scientists still uncertain about the generated data. But as Annemarie Honegger and Samee Ullah have stated in their responses, an amalgamation of dry lab and wet lab, wherein studies done with computational approaches are validated by replication in lab work, could yield much more reliable results.
Nevertheless, there is nothing to complain about; we just have to wait a little longer for further improvements and advancements.