Physics - Relaxation Methods

In this model, the position of each vertex is recalculated every frame as the result of two competing sets of forces:

  1. All the vertices repel each other according to the inverse square law (regardless of whether they are connected by edges).
  2. Each edge tries to return to its nominal length.

Eventually, under the action of these forces, the object (blob) will come to equilibrium and its shape will stabilise. If anything changes, for instance if another object comes close, then the equilibrium will be disturbed and the shape will change.
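As a rough illustration of these two forces, here is a minimal sketch in Java of one relaxation step. It is not the actual blobBean code: the class, the field names (pos, edges, restLength) and the constants are assumptions chosen only to show the idea of accumulating repulsion and spring forces and then moving each vertex a small step every frame.

    // Minimal sketch of one relaxation step (not the actual blobBean code).
    // Class, field and constant names are illustrative assumptions.
    public class Blob {
        double[][] pos;      // pos[i] = {x, y, z} of vertex i
        int[][] edges;       // edges[e] = {vertexA, vertexB}
        double[] restLength; // nominal length of each edge
        double repel = 0.01; // strength of the inverse-square repulsion
        double spring = 0.1; // strength of the pull toward nominal length
        double step = 0.05;  // how far vertices move per frame

        // One frame of relaxation: accumulate the forces, then move the vertices.
        void relax() {
            double[][] force = new double[pos.length][3];

            // 1. Every pair of vertices repels according to the inverse square law,
            //    whether or not they are joined by an edge.
            for (int i = 0; i < pos.length; i++) {
                for (int j = i + 1; j < pos.length; j++) {
                    double dx = pos[j][0] - pos[i][0];
                    double dy = pos[j][1] - pos[i][1];
                    double dz = pos[j][2] - pos[i][2];
                    double d2 = dx*dx + dy*dy + dz*dz + 1e-9; // avoid divide by zero
                    double d  = Math.sqrt(d2);
                    double f  = repel / d2;                   // inverse-square magnitude
                    force[i][0] -= f * dx / d;  force[j][0] += f * dx / d;
                    force[i][1] -= f * dy / d;  force[j][1] += f * dy / d;
                    force[i][2] -= f * dz / d;  force[j][2] += f * dz / d;
                }
            }

            // 2. Each edge pulls (or pushes) its endpoints toward its nominal length.
            for (int e = 0; e < edges.length; e++) {
                int a = edges[e][0], b = edges[e][1];
                double dx = pos[b][0] - pos[a][0];
                double dy = pos[b][1] - pos[a][1];
                double dz = pos[b][2] - pos[a][2];
                double d  = Math.sqrt(dx*dx + dy*dy + dz*dz) + 1e-9;
                double f  = spring * (d - restLength[e]); // positive = too long, pull together
                force[a][0] += f * dx / d;  force[b][0] -= f * dx / d;
                force[a][1] += f * dy / d;  force[b][1] -= f * dy / d;
                force[a][2] += f * dz / d;  force[b][2] -= f * dz / d;
            }

            // Move each vertex a small step along its net force.
            for (int i = 0; i < pos.length; i++) {
                pos[i][0] += step * force[i][0];
                pos[i][1] += step * force[i][1];
                pos[i][2] += step * force[i][2];
            }
        }
    }

Calling relax() once per frame lets the shape drift toward equilibrium; the step size and force constants trade off stability against how quickly it settles.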

An object to be modeled is made up of a number of vertex points; these vertices are connected to other vertices by edges. Each edge has a length, but the actual length can vary under the action of the above forces, so think of edges as bonds or springy struts.

So to define the body, all we need is a list of edges; for each edge we specify the vertex at each end and a nominal length. But if we define the 'blob' in this way there are problems: we have no starting positions for the vertices, and every edge length has to be specified by hand.

To get round these problems, blobBean has been implemented as an extension of ifsBean. The object can then be rendered as a geometry array, with the vertices of the geometry generated every frame from the coordinates calculated by the relax method. The advantage of this is that the initial positions of the vertices are set before the forces are calculated, so at least we know the starting point. The edge lengths can then be calculated from these initial positions, so there is no need to specify them separately.
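Continuing the sketch above, the nominal lengths could be captured from the starting geometry along these lines (the method name initRestLengths is an assumption, not necessarily how blobBean does it):

    // Take the nominal length of every edge from the initial vertex positions,
    // so the lengths never have to be specified by hand.
    void initRestLengths() {
        restLength = new double[edges.length];
        for (int e = 0; e < edges.length; e++) {
            int a = edges[e][0], b = edges[e][1];
            double dx = pos[b][0] - pos[a][0];
            double dy = pos[b][1] - pos[a][1];
            double dz = pos[b][2] - pos[a][2];
            // the distance between the endpoints in the starting shape becomes
            // the length the edge tries to return to during relaxation
            restLength[e] = Math.sqrt(dx*dx + dy*dy + dz*dz);
        }
    }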

I found it useful to add two new parameters to the IndexedFaceSet: nodeIndex and Edges. These specify which nodes and edges take part in the calculations, so that some parts of the shape can be rigid while other parts move. The other advantage is that we can have vertices which are used not for rendering but for structure; for example, some of the 'vertices' might be in the middle of the body for mechanical stability. Without the extra parameters it could be very difficult to work out which of the vertices are needed for the rendering model.
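Continuing the same sketch, the effect of nodeIndex might look something like this: only the vertices it lists are moved by the forces, so anything left out stays rigid, and structural-only vertices can be listed even though no rendered face ever refers to them. The Edges parameter corresponds to the edges array already shown in the first sketch. (The field and method names here are assumptions; the actual blobBean/ifsBean types may differ.)

    int[] nodeIndex;   // vertices that take part in the relaxation; vertices
                       // not listed here keep their positions, so that part
                       // of the shape stays rigid

    // Move only the vertices named in nodeIndex; structural-only vertices can
    // appear here even if the rendered faces never reference them.
    void moveRelaxedVertices(double[][] force) {
        for (int n : nodeIndex) {
            pos[n][0] += step * force[n][0];
            pos[n][1] += step * force[n][1];
            pos[n][2] += step * force[n][2];
        }
    }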

This seems similar to modeling atoms and the bonds between them. Modeling every atom in a body might require more computing power than is likely to be available, but why not have one vertex for every million, or every billion, atoms?

