I'm working on a structural analysis problem using a Graph Neural Network (GNN). My model is unsupervised and physics-informed, meaning it does not use any labeled data such as displacement or force results from simulations. Instead, it learns directly from a single truss configuration, using physical laws (like force equilibrium and internal force calculations) as the basis for its custom loss function.
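For context, here is a minimal sketch of the kind of equilibrium-based loss I mean. The tensor names and the plain-PyTorch scatter are illustrative only, not my exact implementation; the GNN is assumed to predict one axial force per member, and the loss penalises the nodal force-equilibrium residual:

```python
import torch

def equilibrium_loss(axial_forces, edge_index, node_xy, external_loads, free_mask):
    """Sketch of a physics-informed loss for a 2D truss (illustrative names).

    axial_forces:   (E,)   predicted member forces, tension positive
    edge_index:     (2, E) node indices (i, j) of each member
    node_xy:        (N, 2) nodal coordinates
    external_loads: (N, 2) applied loads at each node
    free_mask:      (N,)   True for free (non-support) nodes
    """
    i, j = edge_index
    vec = node_xy[j] - node_xy[i]                 # member vectors i -> j
    unit = vec / vec.norm(dim=1, keepdim=True)    # unit directions

    # A member in tension pulls its end nodes toward each other.
    f_i = axial_forces.unsqueeze(1) * unit        # force on node i from each member
    nodal = torch.zeros_like(external_loads)
    nodal.index_add_(0, i, f_i)                   # accumulate at node i
    nodal.index_add_(0, j, -f_i)                  # equal and opposite at node j

    residual = nodal + external_loads             # should vanish at free nodes
    # Mean (rather than sum) over free nodes keeps the loss scale roughly
    # comparable across trusses of different size.
    return residual[free_mask].pow(2).mean()
```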

The issue arises when I increase the number of members in the truss: the loss fails to decrease, and the model does not converge.

The approach works for small trusses but struggles as the truss becomes more complex (more members and nodes). I suspect the problem relates to scaling of the loss, the expressiveness of the GNN, or the formulation of the loss function; a toy illustration of the scaling issue I have in mind follows below.
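
To make the "scaling" suspicion concrete, here is a toy example with hypothetical numbers (not my actual residuals) showing how a summed residual loss grows with truss size, while a mean-reduced or load-normalised loss stays roughly size-independent:

```python
import torch

# Hypothetical equilibrium residuals (in N) for a small and a large truss.
residual_small = torch.randn(10, 2) * 1e3    # ~10 free nodes
residual_large = torch.randn(200, 2) * 1e3   # ~200 free nodes

# Summed loss grows roughly with the number of nodes (~20x here),
# so the effective gradient scale changes with truss size.
print(residual_small.pow(2).sum(), residual_large.pow(2).sum())

# Mean-reduced loss stays comparable across sizes.
print(residual_small.pow(2).mean(), residual_large.pow(2).mean())

# One option I am considering: normalise by a characteristic load so the
# loss is dimensionless and independent of the force magnitudes involved.
char_load = 1e3
loss = (residual_large / char_load).pow(2).mean()
```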

I’m looking for input on:

  • Why this kind of physics-informed model might fail to converge on larger trusses
  • Whether others have encountered similar scaling or convergence issues in unsupervised GNNs
  • Potential improvements to the architecture or training procedure to help with convergence

If you’ve worked on similar graph-based structural analysis models or have experience with unsupervised physics-informed learning, I’d appreciate your insights!
