[Chimera-users] features/question: stereo3d

Tom Goddard goddard at sonic.net
Wed Nov 27 17:32:18 PST 2013


Hi Matt,

  It is currently hard to control stereo parameters during an animation in Chimera.  As a starting point, note that there are only two independent stereo viewing parameters in Chimera: left/right camera separation, and focal plane distance from the camera.  These describe the position of the virtual "eyes" and "screen" in the molecular scene.  In my previous email I talked about two parameters, eye separation and screen width (both in inches or millimeters), but those define only one independent quantity: their ratio, which gives the camera separation as a fraction of the visible width at the focal plane.  The reason we use two parameters here is so you don't have to specify the camera separation directly in Angstroms in the molecular scene.
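The relationship can be sketched numerically.  This is an illustrative computation only, not Chimera code; the function name and the example numbers are made up:

```python
import math

def camera_separation(eye_sep, screen_width, fov_deg, focal_dist):
    """Camera separation in scene units (e.g. Angstroms).

    Only the ratio eye_sep / screen_width matters; the visible width
    at the focal plane converts that ratio into scene units.
    """
    visible_width = 2 * focal_dist * math.tan(math.radians(fov_deg) / 2)
    return (eye_sep / screen_width) * visible_width

# Example: 2-inch eye separation on a 24-inch screen, 25-degree field
# of view, focal plane 100 Angstroms from the camera.
sep = camera_separation(2.0, 24.0, 25.0, 100.0)
```

Note that scaling eye separation and screen width together (say, 0.2 on 2.4) leaves the result unchanged, which is why only the ratio counts as a stereo parameter.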

  So you need just two knobs to control the stereo settings: left/right camera separation and focal plane distance.  There is also the angular field of view, but I don't count that as a "stereo" parameter because it is a mono viewing parameter.  Likewise the position of the mono camera, or the mid-point of the left/right eye cameras in the scene, is not specifically a "stereo" parameter since it is used for mono viewing too.  Camera separation and focal plane distance are relevant only to stereo, two-camera viewing, and together they completely describe the Chimera stereo geometry.

  There is no Chimera command to set the camera separation or focal plane distance, although we can add those options to the "stereo" command if you need them.  Currently the settings can only be made using Python.

	Tom


On Nov 27, 2013, at 3:41 PM, "Dougherty, Matthew T"  wrote:

> Hi Tom,
> 
> The more I dig into S3D, the more I realize this is not a solved subject; there are several approaches & philosophies.
> 
> Several things seem to have consensus: 
> a) avoid divergence; it is the major cause of eye strain.
> b) interocular distance (IOD) is not interaxial distance (IAD); IOD is just one example of an IAD, frequently used to illustrate the S3D concept, but for most S3D work the IOD is not used as the IAD.
> c) avoid positive parallax greater than one, and avoid negative parallax greater than two; but visual events can have a negative parallax greater than six.
> d) maintain continuity of stereopsis throughout the visual experience, which means the cinematographer is widely adjusting the parallax and field of view, including frequent changes to 
> e) anticipate exceptions; experiment often to map out the experience space; solicit opinions.  Working with S3D usually means developing a high tolerance for extreme S3D, which can be bad for some audiences.  Stereopsis is a primordial visual reflex; done badly, it can make people physically sick.
> 
> Consider visualizations that have multiple IADs and have to interpolate IAD values between positions in order to maintain continuity.
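The kind of IAD interpolation described above can be sketched as follows.  This is a hypothetical illustration; the linear easing, keyframe values, and frame counts are invented, not from either message:

```python
def interp_iad(keys, frame):
    """Linearly interpolate IAD at `frame` from (frame, iad) keyframes."""
    keys = sorted(keys)
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / float(f1 - f0)
            return v0 + t * (v1 - v0)
    # Clamp to the nearest keyframe outside the keyframed range.
    return keys[0][1] if frame < keys[0][0] else keys[-1][1]

# Example: IAD eases from 1 mm at frame 0 to 584 mm (23 in) at frame 100.
mid = interp_iad([(0, 1.0), (100, 584.0)], 50)
```

A smoother easing curve (e.g. cosine) could be substituted to avoid visible velocity discontinuities at the keyframes.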
> 
> When dealing with non-CG imagery it gets more complicated, particularly when it is integrated with CG imagery, due to parallax & scaling mismatches.  Because we are dealing with objects that have no real-world parallel, we have more leeway: if a structure is not exactly linear it is hardly noticeable; do it to a person's face and it is immediately objectionable.
> 
> I would avoid locking into screen/pixel size, or at least be able to override the auto adjustment.  For what I am doing now, the screen size value has been constant.  But if I do display mirroring, or put an image on a 4K and then an HD 48-inch display, it opens me up to shifting sands if values change beyond my control during the session.  This will cause continuity breaks and heavy distortions.
> 
> Amira/Avizo uses a slightly different S3D approach.  SGI/NAG Explorer used a different approach.  Until intuitive physical controls are created that operate like binocular controls, I do not expect major use.  I will probably dedicate a single Contour ShuttleXpress to handle these parameters for interactive applications.  Being able to access all the OpenGL S3D values might be the best long-term solution.  We should keep talking about this for the next version of Chimera.
> 
> For the immediate problem, it seems the best solution is to encode the values I need into a Python/command file before I start the animation, or at worst hack the code.
> 
> 
> Matthew Dougherty
> National Center for Macromolecular Imaging
> Baylor College of Medicine
> ________________________________________
> From: Tom Goddard [goddard at sonic.net]
> Sent: Wednesday, November 27, 2013 3:41 PM
> To: Dougherty, Matthew T
> Cc: chimera-users at cgl.ucsf.edu
> Subject: Re: [Chimera-users] features/question: stereo3d and volume rendering
> 
> Hi Matt,
> 
> 1)  Chimera saves most of the camera stereo settings in session files: eye separation, angular field of view, and focal plane position.  But it doesn't save screen width; instead, when you start Chimera it tries to figure out the width of the actual screen (which depends on what computer you open the session on).  I say it "tries" to figure out the screen width.  On a Mac it appears to correctly get the width of the screen in pixels, but then it converts that to inches assuming 72 pixels per inch instead of using the parameters of your actual screen.  So for my screen, which is about 24 inches wide, Chimera starts up saying it is 35 inches wide.  Distance to screen is related to screen width and field of view angle: tan(a/2) = (w/2)/d.  Another factor is that if you resize the Chimera window, the distance to screen changes.  That may seem odd, but we decided it is better to keep the angular field of view fixed during the resize.  So if you resize the window to twice its width, that also doubles the distance to screen to keep the field of view fixed.  Given this set of quirky behaviors, I suggest the way to get reproducible results is: don't change the screen width setting in the Camera dialog (that setting is not saved).  Adjust only the field of view, the eye separation, and the focal plane position.  The trouble comes in when the screen width isn't the size of your actual screen.  The eye separation (expressed in inches or millimeters) is calibrated against the screen width (also expressed in inches or millimeters) in order to find the separation of the left and right eye cameras in molecular scene units (typically Angstroms).  So if your real screen width is 10 times larger than what Chimera says, then you should make the eye separation 0.2 inches instead of 2 inches, so that the ratio of eye separation to screen width that Chimera is using matches physical reality.
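The screen geometry and the eye-separation compensation described above can be checked with a few lines of arithmetic.  The numbers (screen widths, field of view) are illustrative, not from the messages:

```python
import math

def distance_to_screen(width, fov_deg):
    # tan(a/2) = (w/2)/d  =>  d = (w/2) / tan(a/2)
    return (width / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

d1 = distance_to_screen(24.0, 30.0)   # 24-inch screen, 30-degree FOV
d2 = distance_to_screen(48.0, 30.0)   # doubling width doubles distance
# d2 == 2 * d1, since the field of view is held fixed on resize.

# Compensating for a wrongly reported screen width: if Chimera believes
# the screen is 10x narrower than it really is, scale the eye separation
# by 1/10 so the eye-separation / screen-width ratio matches reality.
real_eye_sep, real_width, believed_width = 2.0, 240.0, 24.0
adjusted_eye_sep = real_eye_sep * believed_width / real_width
```

This reproduces the 0.2-inch-instead-of-2-inch adjustment for a 10x width mismatch.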
> 
>  This is pretty screwed up.  I run into this problem too, using our stereo projector and saved sessions.  A basic difficulty is that Chimera gets the wrong screen size -- not too surprising, since I have both a monitor and a projector hooked up, and it can't possibly know the projected image size unless I tell it.
> 
>  Maybe the solution is to save the screen width in sessions and use that when you load the session, ignoring what the computer thinks the actual screen size is.  That isn't good if I save a session and later view it on stereo screens of different sizes: then the screen width saved in the session will give bad results, while the current Chimera behavior of trusting what the operating system says the screen width is would be better (if that value is actually right).  So I guess the current Chimera behavior is actually best when the operating system tells Chimera the actual screen width.  But in practice, especially with a projector, it tells Chimera the wrong screen width.  So maybe the rule should be: if you type in a new screen width in the Camera dialog, overriding what the operating system reported, then it should be saved in sessions.  And once you override it and it goes into a session, that session will always override the screen width when you load it.  I think that will fix your problem and mine without the above tinkering with the eye-separation value.  So I've changed it -- in tonight's Chimera daily build.
> 
> 2) When you open the same density map multiple times in Chimera, it caches only one copy of the data in memory.  The out-of-memory problem is probably because you are using solid style (volumetric) rendering for all of them, and each copy separately allocates a 3-d color array (OpenGL textures).  Those consume 4 bytes per displayed grid point.  Even if you undisplay a map, it keeps the 3-d color array for solid style rendering unless you switch the display style to surface or mesh.  I guess you show just one copy at a time, since Chimera does not properly display two transparent models.  So one solution is to change the display style to surface for undisplayed models, freeing the large 3-d color array used for solid style rendering.  Another solution is to use the "minimize texture memory" option in the solid rendering options -- that will reuse a single 2-d color array for every slice of the map, rendering more slowly but using less memory.
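The 4-bytes-per-grid-point figure makes it easy to estimate the memory pressure.  The grid size below is an assumed example, not taken from the thread:

```python
def solid_texture_bytes(nx, ny, nz, copies=1, bytes_per_point=4):
    """Estimated color-array memory for solid (volumetric) rendering:
    4 bytes per displayed grid point, per open copy of the map."""
    return nx * ny * nz * copies * bytes_per_point

# A hypothetical 512^3 tomogram displayed as 7 independent copies:
gib = solid_texture_bytes(512, 512, 512, copies=7) / 2**30
# roughly 3.5 GiB of 3-d color arrays, which would explain the
# out-of-memory symptoms with seven floating-point copies loaded.
```

Converting four of the copies from floating point to byte, as Matt did, reduces the data arrays but not these display textures, which is why the display-style and "minimize texture memory" workarounds help.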
> 
>        Tom
> 
> 
> On Nov 27, 2013, at 9:54 AM, "Dougherty, Matthew T" wrote:
> 
>> 1) I have been doing S3D animations.  One of the things I am realizing is that I need to frequently adjust the interaxial and screen distances.  Sometimes my IA might be 1 mm, or it might be 23 inches, depending on the shot.  Currently Chimera does not maintain this camera metadata, so every session I need to reset the values, which makes my animations slightly different.  Since rendering can take several hours, one typo can trash the effort.  Any suggestions on a workaround?  There is also a linkage between IA, horizontal field of view, and screen distance; sometimes I think I get caught in a hysteresis loop trying to converge on the desired shot.  As a note, I have pretty much stopped using the zoom and have gone to dolly, keeping the scale at one.
>> 
>> 2) When doing volume rendering of electron tomograms, I am finding it useful to load the same MRC map file multiple times, so that each model has a different transfer function relating to different densities or structures.  I can currently load seven, but four must be changed from floating point to byte in order to prevent memory lockout, thrashing, error messages, etc.  Is there a way to load a single map file and have multiple models and transfer functions for solid viewing?
>> 
>> 
>> Matthew Dougherty
>> National Center for Macromolecular Imaging
>> Baylor College of Medicine
>> _______________________________________________
>> Chimera-users mailing list
>> Chimera-users at cgl.ucsf.edu
>> http://plato.cgl.ucsf.edu/mailman/listinfo/chimera-users
>> 
> 
> 
> 



