We have a little less than a week until the soft "pencils down" deadline. In theory, we are supposed to spend the remaining time after that point doing cleanup, documentation, etc. In practice, that will hardly be the case. The firm "pencils down" deadline is two weeks away. I'm on track to have the basic morph completed by Thursday. Then I will spend the weekend figuring out a basic keyframe pattern using atoms. The final week will be spent working on Mesh itself -- e.g. on rendering and optimization, which still have not been done.

At first our thinking was that a morph target is a type of model. One would render the target instead of rendering the original mesh. That idea is incorrect because it makes it difficult to apply multiple morphs to the same mesh.

The direction I've been going is this: you create a morph atom, which is calculated as the difference between two meshes. Mesh has to honor a public array of vertices as it is (even though it performs optimizations behind the scenes), so we assume the public ordering is valid for both meshes. This means for any given vertex, mesh A contains that vertex's position, normal, etc. when the morph is at 0.0, and mesh B contains that information when the morph is at 1.0.
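To make that concrete, here is a minimal sketch of building such a morph atom as per-vertex deltas. The names (`Vertex`, `make_morph_atom`) are illustrative, not actual PySoy API; the only assumption carried over from above is that both meshes share the same public vertex ordering.

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    position: tuple  # (x, y, z)
    normal: tuple    # (nx, ny, nz)

def sub(a, b):
    """Component-wise difference of two 3-tuples."""
    return tuple(ai - bi for ai, bi in zip(a, b))

def make_morph_atom(mesh_a, mesh_b):
    """Compute per-vertex deltas between two meshes.

    mesh_a is the shape at morph weight 0.0, mesh_b at 1.0.
    Both lists must use the same public vertex ordering."""
    assert len(mesh_a) == len(mesh_b), "meshes must share vertex ordering"
    return [(sub(b.position, a.position), sub(b.normal, a.normal))
            for a, b in zip(mesh_a, mesh_b)]
```

The atom stores only differences, so it carries no dependency on either source mesh once computed.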

Then you specify a delta between 0.0 and 1.0, and the atom computes the vertex interpolation. You then bind the morph atom (which is essentially a matrix of values to be added) to the original mesh. This means that rendering is done on the original mesh rather than on a target model, and multiple morph atoms can be added to a single mesh.
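A rough sketch of what binding with a delta could look like at apply time. This is my own illustration, not PySoy code: each bound atom contributes its per-vertex deltas scaled by its weight, and contributions from multiple atoms simply sum onto the base mesh's positions.

```python
def apply_morphs(base_positions, bound_atoms):
    """Apply weighted morph atoms to a base mesh's vertex positions.

    base_positions: list of (x, y, z) tuples from the original mesh.
    bound_atoms: list of (deltas, weight) pairs, where deltas is a
    per-vertex list of (dx, dy, dz) and weight is in [0.0, 1.0]."""
    out = []
    for i, (x, y, z) in enumerate(base_positions):
        for deltas, weight in bound_atoms:
            dx, dy, dz = deltas[i]
            # Scale each atom's delta by its weight and accumulate.
            x += dx * weight
            y += dy * weight
            z += dz * weight
        out.append((x, y, z))
    return out
```

Because the base mesh is what gets rendered, two atoms bound to the same mesh just add their weighted contributions, which is exactly why the "morph target as a separate model" design didn't compose.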

Animation is a slightly more difficult problem, but expressing the morph as an atom makes it more tractable. I'm thinking the morph atom will have an optional property (some sort of soy.atoms.Keyframe) that expresses what the transformation matrix will look like at some point in the future. Then, as we step through the thread, we compute the matrix by multiplying the delta by the ratio of the elapsed time to the keyframe's time. This could get costly if we are generating new objects every step, but it should work, especially if we have an OpenGL VBO doing the actual work of applying the matrices for us.
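The time-stepping idea above can be sketched as a single function: given when the animation started and when the keyframe should be reached, the current morph weight is the target weight scaled by the elapsed-time ratio. This is a hypothetical illustration of the scheme, not the `soy.atoms.Keyframe` interface.

```python
def morph_weight_at(start_time, key_time, now, target_weight):
    """Linearly ramp a morph weight toward target_weight.

    The weight starts at 0.0 at start_time and reaches
    target_weight at key_time; past the keyframe it is clamped."""
    if now >= key_time:
        return target_weight
    ratio = (now - start_time) / (key_time - start_time)
    return target_weight * ratio
```

Each thread step then only recomputes a scalar weight and hands the per-vertex work to the GPU, which is what keeps the object churn concern manageable.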
