Tuesday, August 16, 2011

Less than a week

With less than a week until the firm "pencils down" date, I'm feeling a little disappointed in where my project is. I'm at the point where mesh morphs are operational (albeit buggy), with what I consider a solid and simple API.

 mesh = soy.models.Mesh()
 mesh.size = 6
 mesh[0] = face  # a soy.atoms.Face object
 # repeat for mesh[1] through mesh[5]
 clone = mesh.clone()
 # clone is a Mesh object that can be rendered in its own right, if it is bound to a body
 # change the face and vertex data for clone[0] through clone[5]
 target = soy.models.Target(mesh)
 morph = mesh.morph(clone, 0.5)  # mesh.morph(variantMesh, delta) spins off a soy.atoms.Morph object
 target.morphs.add(morph)
 # now bind target to a soy.bodies.Body; when its render() method is called,
 # it will apply all its morphs at their given deltas

Rendering has not been implemented for Mesh yet, and this process will get more complicated on the back end once we add optimization. Basically, we have to maintain the public vertex ordering while shifting vertices around behind the scenes so that OpenGL can render faces sharing the same material consecutively (switching between materials needlessly is costly). The getters and setters for Mesh already do this, but it is not yet honored by clone(), soy.atoms.Morph, or the Target constructor.
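As a rough illustration of that split (these names are hypothetical, not PySoy's actual internals), the idea is to keep public face indices stable while handing the renderer a material-grouped order:

 # Sketch only: public indices never move, but the render order groups
 # faces by material so the renderer switches materials as rarely as possible.
 class FaceStore:
     def __init__(self, size):
         self._faces = [None] * size      # public order: mesh[0] .. mesh[size-1]

     def __setitem__(self, index, face):
         self._faces[index] = face        # public index stays fixed

     def __getitem__(self, index):
         return self._faces[index]

     def render_order(self):
         # internal order only: group face indices by material object
         return sorted(range(len(self._faces)),
                       key=lambda i: id(self._faces[i].material))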

Odds are this process won't even be close to complete by pencils down, but I expect something more fully functional by PyCon.

Tuesday, August 9, 2011

One week to "pencils down"

We have a little less than a week until the soft "pencils down" deadline. In theory, we are supposed to spend the remaining time after that point doing cleanup, documentation, etc. In practice, that will hardly be the case. The firm "pencils down" deadline is two weeks away. I'm on track to have the basic morph completed by Thursday. Then I will spend the weekend figuring out a basic keyframe pattern using atomics. The final week will be spent working on Mesh itself -- e.g. on rendering and optimization, which still have not been done.
At first our thinking was that a morph target is a type of model: one would render the target instead of the original mesh. That idea doesn't hold up, because it makes it difficult to apply multiple morphs to the same mesh.
The direction I've been going is this: you create a morph atom, which is calculated as the difference between two meshes. Mesh already has to honor a public array of vertices (even though it performs optimizations behind the scenes), so we assume the public ordering is valid for both meshes. This means for any given vertex, mesh A contains that vertex's position, normal, etc. when the morph is at 0.0, and mesh B contains that information when the morph is at 1.0.
Then you can specify a delta between 0.0 and 1.0, and the atom computes the vertex interpolation. You then bind the morph atom (which is basically a matrix of values to be added) to the original mesh. This means that rendering is done on the original mesh rather than on a target model, and multiple morph atoms can be added to a single mesh.
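To make the interpolation concrete, here is a rough sketch of the arithmetic on plain (x, y, z) position lists; the helper names are hypothetical, and the real soy.atoms.Morph will operate on the mesh's own vertex data:

 # per-vertex offsets: mesh B minus mesh A, in the shared public vertex order
 def build_morph(positions_a, positions_b):
     return [(bx - ax, by - ay, bz - az)
             for (ax, ay, az), (bx, by, bz) in zip(positions_a, positions_b)]

 # morphs is a list of (offsets, delta) pairs; each is applied additively,
 # which is what lets several morphs act on one mesh at once
 def apply_morphs(positions, morphs):
     result = list(positions)
     for offsets, delta in morphs:
         result = [(x + dx * delta, y + dy * delta, z + dz * delta)
                   for (x, y, z), (dx, dy, dz) in zip(result, offsets)]
     return result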
Animation is a little more difficult problem, but expressing morph as an atomic makes it more tractable. I'm thinking the morph atom will have an optional property (some sort of soy.atoms.Keyframe), which basically expresses what the transformation matrix will look like at some point in the future. Then as we step through the thread, we compute the matrix by multiplying the delta by the ratio between the current time and the keyframe. This could get costly if we are generating new objects every time, but it should work, especially if we have an OpenGL VBO doing the actual work of applying the matrices for us.
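As a sketch of that timing math only (soy.atoms.Keyframe doesn't exist yet, so every name here is hypothetical):

 # before the keyframe time, scale the target delta by how far along we are;
 # at or past the keyframe time, the morph is fully applied at its target delta
 def delta_at(now, start_time, keyframe_time, keyframe_delta):
     if now >= keyframe_time:
         return keyframe_delta
     progress = (now - start_time) / (keyframe_time - start_time)
     return keyframe_delta * progress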