An old trick was to take a polygon hair prop with a higher transparency/alpha, make a duplicate, maybe scale it up a fraction, offset it a bit, and end up with a thicker final mesh to render. This gave me the idea for a simple new plugin to Thicken out the results of the simulations. It might also give a more realistic render and rely less upon painting the layers of hair into the texture map.
For the Thicken beta version 0.1 I’ve allowed for a single additional layer with a surface offset, xyz displacement and a scaling factor. I’ll probably add a second layer and a second column of settings, but I don’t think I’ll go beyond that because multiple layers would make for a very tricky UI. I’ve also considered giving the layers their own shading domain but I’m not sure if that is safe yet – Carrara can cope with it when you add and remove them in the vertex modeller so it should be possible.
The Thicken plugin can of course be used to conveniently add a surface offset or indent a mesh by telling it to hide the original mesh.
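The layer settings above can be sketched in a few lines. This is only a minimal illustration of the idea, not the actual Carrara SDK code, and the function name `thicken_layer` is hypothetical: each duplicated vertex is pushed out along its normal (the surface offset), scaled about the mesh centroid, then shifted by the xyz displacement.

```python
def thicken_layer(verts, normals, offset=0.01, displace=(0.0, 0.0, 0.0), scale=1.02):
    """Generate one thickened duplicate layer from vertex positions and normals."""
    # Centroid of the original mesh, used as the pivot for the scaling factor.
    n = len(verts)
    cx = sum(v[0] for v in verts) / n
    cy = sum(v[1] for v in verts) / n
    cz = sum(v[2] for v in verts) / n
    out = []
    for (x, y, z), (nx, ny, nz) in zip(verts, normals):
        # Surface offset along the vertex normal.
        x, y, z = x + nx * offset, y + ny * offset, z + nz * offset
        # Scale about the centroid, then apply the xyz displacement.
        x = cx + (x - cx) * scale + displace[0]
        y = cy + (y - cy) * scale + displace[1]
        z = cz + (z - cz) * scale + displace[2]
        out.append((x, y, z))
    return out
```

Hiding the original mesh and keeping only this generated layer gives the offset/indent trick mentioned above.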
I used it in the render of this cloth hair simulation. There are only a few artefacts where the layers of hair went through each other before the thickening was applied – you can see these on the left shoulder. This next attempt at dynamic cloth hair used twice as many layers as the last one, with five down the side of the head and two more at the back. The self collisions performed nearly perfectly, but the five second animation sequence took more than 12 hours to run when draping over the figure. I also improved the alpha map technique by using two pixel thick strokes to paint the tips of the hair.
After a few drafts I’m getting encouraging results with dynamic hair. This symmetrical hair was modelled lock by lock, each one layered over the other. The roots of the hair fully conform. From the side of the head down to below the ears is the falloff area; from there right down to the tips the hair is fully dynamic. It’s important to get the initial style right where the hair conforms because it will try to return to that shape during the simulation.
Improvements are needed in the number of layers of hair. It needs another layer at the side of the head and one or more lower at the back of the head below the ears. There should also be more variety in the locks – I duplicated and moved the same ones at the side of the head for each layer to save time. The texture and transparency maps need more effort and care to paint with higher contrast in the strands and cleaner detail for the alpha on the tips. I also made a scalp prop to fit underneath based on a copy of the figure’s head geometry.
I have shown animations before with dynamic cloth hair, but these did not have self collisions or enough length to drape over the costume/figure. This example used a large sphere on an angle to cover the whole head and the ears, and a capsule for the neck. The collisions over the costume required the modified code which can record and store simulations at a higher fps.
While working on this I found that the conforming falloff feature of my plugin wasn’t working quite how I wanted or expected. It’s supposed to fall off gradually from fully conforming to fully dynamic, controlled by a painted map or a zone or both – but there was not much ‘falloff’ apparent. I made a slight change to the simulation code to improve this. My idea is that if the falloff for a vertex is 50% then it will be pulled back half way from where the physics take it towards the conforming position. The conform rate value is now used as a speed limit instead. The conforming force is still very strong but the falloff now produces a smoother transition. Collisions overrule this so I will need to experiment a bit more. I want this falloff because in cloth simulators that only have an on/off pixel constraint selection, a really obvious hard bend appears along that dynamic edge.
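The revised falloff behaviour described above could look something like the sketch below. This is an assumed reconstruction, not the plugin’s real code: a vertex with falloff weight `w` is pulled `w` of the way from its simulated position towards its conforming position, with the conform rate acting as a per-step speed limit on that pull.

```python
def apply_falloff(sim_pos, conform_pos, weight, conform_rate, dt):
    """Return the corrected vertex position for one simulation step.

    weight 0.0 = fully dynamic, 1.0 = fully conforming.
    conform_rate limits how fast the vertex may be pulled (units per second).
    """
    # Target: blend between the physics result and the conforming position.
    target = tuple(s + (c - s) * weight for s, c in zip(sim_pos, conform_pos))
    # Clamp the pull to the speed limit for this time step.
    dx = [t - s for t, s in zip(target, sim_pos)]
    dist = sum(d * d for d in dx) ** 0.5
    max_step = conform_rate * dt
    if dist > max_step > 0.0:
        f = max_step / dist
        dx = [d * f for d in dx]
    return tuple(s + d for s, d in zip(sim_pos, dx))
```

With a weight of 0.5 the vertex ends up exactly half way between the physics result and the conforming shape, which is the smoother transition described above; collision response would still run after this and can override it.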
I made another quite simple but important change to the plugin code: adding a self-collision margin. In my simulator each vertex of the cloth mesh is treated as a particle with a thickness. Unless the user overrides that thickness, all of the vertex particles are thick enough that they just touch each other. That is one reason why a regular mesh is very important. The new margin value allows for thinner vertices, and now the cloth folds can get much closer. The hair’s need for self collisions became obvious because without this change the dynamic locks would pass through and intersect with each other. This has a big impact on designing and modelling the style because the hair must start the simulation without being tangled.
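The margin idea can be shown with a single particle pair. Again this is a hypothetical sketch, not the simulator’s code: each vertex particle gets a default radius of half the typical edge length so that neighbours just touch, and the margin shrinks that radius so folds can come closer before the collision response separates them.

```python
def resolve_pair(p, q, edge_length, margin=0.0):
    """Push two vertex particles apart if they overlap; return new positions."""
    radius = edge_length * 0.5 - margin   # margin makes the vertices thinner
    min_dist = 2.0 * radius               # particles just touch at this distance
    dx = [b - a for a, b in zip(p, q)]
    dist = sum(d * d for d in dx) ** 0.5
    if dist >= min_dist or dist == 0.0:
        return p, q
    # Separate the pair symmetrically along the line between them.
    push = (min_dist - dist) * 0.5 / dist
    p2 = tuple(a - d * push for a, d in zip(p, dx))
    q2 = tuple(b + d * push for b, d in zip(q, dx))
    return p2, q2
```

With a margin of zero, two particles half an edge length apart are pushed out to a full edge length; with a margin of a quarter edge length the same pair is left untouched, which is why the folds can now sit closer together.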
Another problem with my simulator has become apparent when trying to run a layered simulation where one cloth item drapes over another.
My plugin is a deformer and not a physics solver. The accurate simulations are run by getting Carrara to slowly advance the time sequencer. The current tests I’ve been doing included attempts to combine dynamic hair with a dynamic costume – where the hair would drape as a second layer over the costume. The simulation is normally saved at the scene’s frame rate, so when the higher fps was used on the hair, the costume would not move in small steps but would jump to the next set of stored data only at time multiples of the scene frame rate.
A number of separate cloth mesh objects can go into the one simulation, but hair, separate layers and different types of cloth will each need different properties.
To get this to work better I set both the dress and the long hair into record mode and then ran the simulation from one of them to get the high frame rate. This caused an immediate crash and an impossible-to-find bug. I had to give up trying to find how and where it was happening because no single specific part of the code was causing it. I believe there is no easy way to fix it and that the code is not ‘thread safe’. Therefore only one simulation should be set to record and run at any time.
The solution for the layered cloth is simple enough. A new setting and a change to the plugin are required so that the costume simulation can be run first and all of its frames saved. If the simulation is set to run at 150 fps with a scene rate of 25 fps then all 150 snapshots of the cloth will be saved per second for use in the next simulation that layers over it. This does make for a much larger file size, so some kind of management of the stored data to discard the extra frames could also be added.
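The recording and playback scheme could be structured roughly as below. The class and method names are assumptions for illustration only: the first simulation records one snapshot per sub-step at the simulation fps, a later layered simulation samples the costume’s mesh at any sub-frame time, and an optional thinning step discards the extra frames afterwards to manage the file size.

```python
class FrameStore:
    """Stores cloth snapshots at the simulation rate for layered playback."""

    def __init__(self, sim_fps, scene_fps):
        self.sim_fps = sim_fps
        self.scene_fps = scene_fps
        self.frames = []                 # one mesh snapshot per simulation step

    def record(self, snapshot):
        self.frames.append(snapshot)

    def sample(self, time_sec):
        # Nearest stored sub-step at the simulation rate, so a second
        # simulation layered over this one never jumps in scene-frame steps.
        i = min(int(round(time_sec * self.sim_fps)), len(self.frames) - 1)
        return self.frames[i]

    def thin_to_scene_rate(self):
        # Management step: keep only snapshots on scene-frame boundaries.
        step = self.sim_fps // self.scene_fps
        self.frames = self.frames[::step]
```

At 150 fps over a 25 fps scene this stores six snapshots per scene frame, and thinning keeps every sixth one, returning the data to the normal scene-rate size once the layered simulation no longer needs it.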
The next release of the cloth plugin will have this high definition recording and playback feature. I will also need to add a similar feature to the Jiggle plugin for it to work consistently with my cloth. It might also help when combining cloth with the Carrara physics solver or strand based hair. I have not tested that yet.