3dcheapskate  
#1 Posted : Thursday, March 31, 2016 2:47:22 AM(UTC)
Back in 2010, on pages 3/4 of Helgard's "Underwater Submarine" thread at Renderosity, bagginsbill explained how to make the Poser atmosphere's depth cue use a more correct non-linear falloff, and his explanation ended with a tantalizing "...can you figure out how we can calculate the distance from the camera to the object?"

I recently tried to figure it out, posting this Distance From Camera To Point Being Rendered ? thread on RDNA's Node Cult and adding this post to the original Underwater Submarine thread.

After re-reading bagginsbill's posts I managed to come up with a 15-node network that should give me the distance between the camera and the point being rendered, with the proviso that I have to manually enter the x, y, z coordinates of the camera into the appropriate nodes.



3dcheapskate attached the following image(s):
xIn15.jpg
3dcheapskate  
#2 Posted : Thursday, March 31, 2016 2:56:11 AM(UTC)
Bagginsbill replied on the original Underwater Submarine thread with the hint that in FireFly vector math and color math are the same, so I did a rethink and came up with this attempt to implement the complete formula (4 nodes for the camera-to-point distance instead of 15)...

b = x / (1 - .5 ^ (x / h))

...where x is the distance from the camera to the point being rendered, h is the half-distance, and b is the corrected value that plugs into DepthCue_EndDist. Basically (if I've got it right) Color_Math_2 (Subtract) should simply be doing a vector subtraction, Color_Math (Abs) should be ensuring all three components are positive, and when the result of that is plugged into the Math_Functions (Divide) node it should be using the scalar value (length) of the vector. Three nodes, three 'should's... I think I've got one or more of those 'should's wrong - middle-distance objects still appear to vanish into the haze.
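If I've got the idea right, b varies with x, so you can't just type a single number into DepthCue_EndDist - it has to be recalculated for every rendered point, which is why the camera-to-point distance is needed in the first place (as I understand it, the depth cue on its own only does a linear ramp up to EndDist). A quick illustration in ordinary Python rather than nodes - this isn't shader code, and the half-distance and test distances are just made-up numbers:

# b = x / (1 - 0.5**(x/h)) is the EndDist that makes the depth cue's linear
# ramp x/b match the exponential falloff 1 - 0.5**(x/h) at distance x
def corrected_end_dist(x, h):
    return x / (1.0 - 0.5 ** (x / h))

h = 100.0                              # half-distance (arbitrary test value)
for x in (10.0, 50.0, 100.0, 400.0):   # camera-to-point distances (arbitrary)
    print("x = %6.1f   b = %8.2f" % (x, corrected_end_dist(x, h)))
# b is different for every x, so a fixed EndDist can't reproduce the falloff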


3dcheapskate attached the following image(s):
maybe.jpg
bagginsbill  
#3 Posted : Thursday, March 31, 2016 6:14:32 AM(UTC)
Almost! The Color_Math Abs isn't the distance yet, but you're using it as if it is.


That node you showed is abs(P - C), or, breaking it down into components, the vector( abs(dx), abs(dy), abs(dz) ).

What you NEED it to be is sqrt( dx ** 2 + dy ** 2 + dz ** 2 ), where ** means exponent. This is a number, not a vector. You have not yet converted the vector to a magnitude.
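(For a concrete example: if P - C = (3, -4, 0) then the Abs gives the vector (3, 4, 0), but the distance is sqrt( 3 ** 2 + 4 ** 2 + 0 ** 2 ) = 5.)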


Keep going. Doing this stuff keeps you from getting old.


bagginsbill  
#4 Posted : Thursday, March 31, 2016 6:17:59 AM(UTC)
Note: when preparing to compute the magnitude, you will need to square each component of the vector. As a result of squaring, the negative numbers disappear anyway, so no absolute value is required - saving one node before you have to add some more. You're close!
3dcheapskate  
#5 Posted : Thursday, March 31, 2016 9:55:10 AM(UTC)
Assuming I have two simple points correct, I think I have it:

1) The maths of a Color_Math node: if I plug two colors (r,g,b) and (R,G,B) into a Color_Math Add, I think the result will be (r+R, g+G, b+B), i.e. the operator is applied to each channel independently.

If that's correct then vector( dx ** 2, dy ** 2, dz ** 2 ) can be obtained by plugging vector( dx, dy, dz ) into both inputs (set to white) of a Color_Math Multiply.

2) The automatic color-to-number (or vector-to-scalar) conversion when you plug a color/vector output into a number/scalar input: the clearest example is plugging an Image_Map into a Math_Functions to get a greyscale image. I think the number is (R + G + B) / 3.


If that's correct then sqrt( dx ** 2 + dy ** 2 + dz ** 2 ) can be obtained by plugging vector( dx ** 2, dy ** 2, dz ** 2 ) into a Math_Functions Sqrt with the Input value set to 3.0.
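To convince myself, here's the same thing in ordinary Python (not shader code) with made-up camera and point coordinates. Note that the Sqrt node multiplying its two inputs before taking the square root is my assumption, based on needing that Input value of 3.0 to undo the divide-by-3:

import math

C = (1.0, 2.0, 3.0)                        # camera position (made-up numbers)
P = (4.0, 6.0, 3.0)                        # point being rendered (made-up numbers)

d    = [p - c for p, c in zip(P, C)]       # Color_Math Subtract, per channel
d2   = [v * v for v in d]                  # Color_Math Multiply, the vector times itself
avg  = sum(d2) / 3.0                       # automatic colour-to-number conversion
dist = math.sqrt(avg * 3.0)                # Math_Functions Sqrt with Input value 3.0
                                           # (assumed to mean sqrt(value_1 * value_2))
print(dist)                                # 5.0, i.e. sqrt(3**2 + 4**2 + 0**2)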
bagginsbill  
#6 Posted : Thursday, March 31, 2016 3:11:03 PM(UTC)
Exactly right! Brilliant.
isaoshi  
#7 Posted : Thursday, March 31, 2016 3:16:22 PM(UTC)
Note that the dolly camera is the only one for which you can use the DollyXYZ parameter values directly in this shader.

The other perspective cameras' DollyXYZ parameter values do not represent their actual position.

AmandaLevres  
#8 Posted : Thursday, March 31, 2016 6:24:52 PM(UTC)
Originally Posted by: isaoshi
Note that the dolly camera is the only one for which you can use the DollyXYZ parameter values directly in this shader.

The other perspective cameras' DollyXYZ parameter values do not represent their actual position.



So, for "Main Camera" could you not in frame 1 set everything to zero (trans and rotation) add a primitive (e.g sphere again zeroed), parent to the camera then use the sphere's x,y and z globals in the params?


Amanda
3dcheapskate  
#9 Posted : Thursday, March 31, 2016 9:48:12 PM(UTC)
isaoshi raised a good point - only the dolly camera is actually at the location specified by the camera's dollyX, dollyY and dollyZ.
(Edit: N.B. In the original Underwater Submarine thread bagginsbill's solution to this "distance from camera to point being rendered" problem was part of a complete underwater environment package (never completed, but available as-is), which included a Python script to keep the camera x, y, z in this part of the shader up to date. That script should work for both dolly and orbiting cameras. This bagginsbill post in the Getting the same view from dolly/orbiting camera thread indicates that I've come full circle!)

Amanda: there were a couple of threads over at Renderosity where I was trying to work out how to find the exact X, Y, Z coordinates in world space of the main camera - this one, Focal Length Value For An Exact 90 Degree FOV ?, is the only one I can find at present; see specifically the discussion from the second-to-last post on the first page to the end of the thread.

Edit: just found this one, Getting the same view from dolly/orbiting camera, as well.

I've attached the diagram* from the first post on the second page of the Focal Length Value For An Exact 90 Degree FOV ? thread (which I think is correct).

Knowing now that FireFly treats colours and vectors the same, it should be possible to do this maths in the material room too... but whether anybody would actually want to do that is a different matter entirely!

*The exact value for the "about 113.5 inches" is 113.52000 inches, as calculated later on the second page of that thread.


3dcheapskate attached the following image(s):
revolvingcameraposition.png
3dcheapskate  
#10 Posted : Thursday, March 31, 2016 10:04:07 PM(UTC)
Originally Posted by: bagginsbill
Exactly right! Brilliant.


Thank you - that's the first time I've ever put my assumptions (1) and (2) in writing, and doing so suddenly made a lot of things much clearer.

Plugging my Abs(P-C) into the Math_Functions would convert vector( abs(dx), abs(dy), abs(dz) ) to (abs(dx) + abs(dy) + abs(dz)) / 3, which is rather different from the sqrt( dx ** 2 + dy ** 2 + dz ** 2 ) that I was after.
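(With the same example numbers as before: for (dx, dy, dz) = (3, 4, 0) that first route gives (3 + 4 + 0) / 3 = 2.33, while the distance is actually 5.)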
bagginsbill  
#11 Posted : Friday, April 1, 2016 12:52:41 PM(UTC)
If you have the UnderWater package, go into the Runtime\Python\poserScripts folder and have a look at UnderWaterParameterAutomation.py. Look in the method parameterAutomationCallback, which, IMO, is pretty readable. You should be able to see how it keeps the camera up to date. It also moves the water plane and the envsphere to adjust to the depth you want to simulate.

First it gets the current camera:

cam = scene.CurrentCamera()

Then it gets the camera position:

pos = cam.WorldDisplacement()

Then it locates the atmosphere shader tree:

shaderTree = scene.AtmosphereShaderTree()

The shader has a node with the label 'PM:Camera Position'. It fetches that node (and others) in one call to a handy method I wrote. (I'm replacing lots of details with ellipses here.)

posNode, unitNode, ... = self.fromShaderTreeGet(shaderTree, 'PM:Camera Position', 'PM:Unit', ...)

There is some logic to deal with user-selectable distance units (unitNode); the result ends up in a variable, k, which is used later.

It keeps track of the last camera position it applied in self.lastCameraPosition. If the new position differs, it sets the node inputs with:

for i in xrange(3): posNode.Input(i).SetFloat(pos[i] * k)

Feel free to use this bit in your own script.
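Stripped of the package's housekeeping, the update boils down to something like this sketch (not the package code - finding the node by its material-room label and hard-coding the unit factor are just for illustration here):

import poser

scene = poser.Scene()
pos   = scene.CurrentCamera().WorldDisplacement()    # camera position in Poser native units

tree    = scene.AtmosphereShaderTree()
posNode = None
for node in tree.Nodes():                            # look for the node labelled 'PM:Camera Position'
    if node.Name() == 'PM:Camera Position':
        posNode = node
        break

if posNode:
    k = 103.2                                        # PNU -> inches; use whatever unit the shader expects
    for i in range(3):
        posNode.Input(i).SetFloat(pos[i] * k)
    tree.UpdatePreview()                             # refresh the material room preview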


isaoshi  
#12 Posted : Friday, April 1, 2016 2:12:31 PM(UTC)
Originally Posted by: AmandaLevres
So, for "Main Camera" could you not in frame 1 set everything to zero (trans and rotation) add a primitive (e.g sphere again zeroed), parent to the camera then use the sphere's x,y and z globals in the params?

Amanda


Unfortunately not, Amanda. As soon as you parent the primitive to the camera, its xyzTran parameters show its position relative to the camera, not its world position.


If you're happy to put the camera xyz position into the shader manually, here's an alternative method:-

Run the Python script MainToDolly.py, and then use the Dolly camera xyz values.

The script copies WorldDisplacement() (and other settings such as rotation and focal length) from the Main camera to the Dolly camera, so that it exactly mimics the Main camera.


(I've renamed my copy of this script DollyToMain.py, which better describes what the script does, rather than how the code does it).


Edited to add: this of course assumes that you are rendering with the Main camera!


isaoshi  
#13 Posted : Friday, April 1, 2016 2:40:42 PM(UTC)
Also note that in this shader the PNode is set up for INCHES. 3Dcheapskate mentioned this, but I think it's worth highlighting.

If you don't use inches, you need to change the three values in the PNode.

The PNode output unit is one-thousandth of a Poser unit, which is 0.1032 inches.


bagginsbill  
#14 Posted : Friday, April 1, 2016 3:32:53 PM(UTC)
Originally Posted by: isaoshi
Also note that in this shader the PNode is set up for INCHES. 3Dcheapskate mentioned this, but I think it's worth highlighting.

If you don't use inches, you need to change the three values in the PNode.

The PNode output unit is one-thousandth of a Poser unit, which is 0.1032 inches.


It isn't 1/1000 of a PNU. It's 0.1 inches exactly. I have a thread from long ago where I prove it, but I can't remember where it is.

However, here's a thread where I explain it: Units for P Node thread at RDNA



bagginsbill  
#15 Posted : Friday, April 1, 2016 3:56:13 PM(UTC)
OK, I've checked it once more and it's proved: the P unit is 1/10th of an inch exactly.

The middle box has a shader on it that measures P units in Y. It draws a red band up to 1032 P units, and a blue band up to 1000 P units.

The box on the left is yTran 103.2 inches (1 PNU). The box on the right is yTran 100.0 inches.

The bands match the boxes, thus proving that the unit of measure in the P node is 10ths of an inch exactly.
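(In numbers: 1032 P units line up with 103.2 inches and 1000 P units line up with 100.0 inches, so 1 P unit = 103.2 / 1032 = 100.0 / 1000 = 0.1 inch.)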


bagginsbill attached the following image(s):
PValue.png
bagginsbill  
#16 Posted : Friday, April 1, 2016 4:00:57 PM(UTC)
Note of caution: the red/blue shader does not produce the same output in SuperFly. Apparently SM messed up a very simple thing - either the units of the P node are different in SuperFly, or the P node doesn't work there at all.
isaoshi  
#17 Posted : Friday, April 1, 2016 5:18:12 PM(UTC)
Thanks for the correction.

Here's a beginner's (i.e. my) Python script to output the Main camera position and avoid that nonsense with the Dolly camera:-

# Output the Main Camera position in inches

import poser

Cam = poser.Scene().ActorByInternalName("MAIN_CAMERA")
(x, y, z) = Cam.WorldDisplacement()   # position in Poser native units (1 PNU = 103.2 inches)

print " "
print "Main Camera position in inches:"
print "X = " + str(round(x * 10320) / 100)   # x * 103.2 gives inches, rounded to 2 decimal places
print "Y = " + str(round(y * 10320) / 100)
print "Z = " + str(round(z * 10320) / 100)
gishzida  
#18 Posted : Saturday, April 2, 2016 12:16:53 AM(UTC)
Some silly questions from a bit of a dunce:

Why can't the Z-dolly, the X-orbit and the Y-orbit be read from the camera and fed to the calculations for the shader? [Assuming that one writes a script to call the actor's parameters (if these parameters actually exist), calculate the XYZ values, and then insert them into the shader.] The reality here is that the coordinates of the camera are usually set via the Z dolly and the two orbits rather than via the xyz location of the camera. [That is, if you are not hacking BB's script to get the coordinates.]

Taking a slightly different tack--

Why is it that "focal distance" is different from the radial vector and/or is not usable for what is being attempted here?

"Focal distance" appears to be the radial distance from the location of the camera to a 2D plane intersecting the object to be rendered. Since the point of the shader is to make visual / atmospheric changes (much like focal distance and f-stop are used for depth of field), it would seem that it would not matter what the actual XYZ coordinates were, only the distance between the camera and the object... Coordinate rotations relative to the default "Universe" coordinate system do not matter, since the "visual" coordinate system is always along an axis from the camera to the object... where the object represents XYZ = 0 and the camera represents X = 0, Y = 0, Z = focal distance. [N.B. back in the '80s I used to program computer-assisted/controlled coordinate measuring machines used to measure parts for the Space Shuttle engines, so I do have some understanding of coordinate systems and coordinate system rotations.]

The assumption here is that the camera is always going to be looking at the central object of the render, and therefore only the actual radial distance from the camera to the object is important. If this assumption is correct then the calculations for the shader would change, based on setting the focal distance, then reading that value and building the shader. ...Or am I being too simplistic about this-- I am making the assumption we're talking about light travelling through "air" rather than water, without an "axial density gradient" [i.e. that the density of the atmosphere is relatively constant and not thicker or thinner along one of the axes of the scene]?

As I said, I'm a bit of a dunce on the internals of Poser and shaders... so don't kick me too hard...
isaoshi  
#19 Posted : Saturday, April 2, 2016 3:45:15 AM(UTC)
There's no need for the complexity of using XY-orbit and Z-dolly in a script to determine the camera location - it's available directly using WorldDisplacement().


On your second point, the purpose of the shader (as I understand it) is to adjust the (incorrect) default response of the depth cue node to give a more realistic fall-off. If you plug a fixed value, such as "camera to main object", into the node, this adjustment will no longer take place.


gishzida  
#20 Posted : Saturday, April 2, 2016 7:52:58 AM(UTC)
Originally Posted by: isaoshi
There's no need for the complexity of using XY-orbit and Z-dolly in a script to determine the camera location - it's available directly using WorldDisplacement().

On your second point, the purpose of the shader (as I understand it) is to adjust the (incorrect) default response of the depth cue node to give a more realistic fall-off. If you plug a fixed value, such as "camera to main object", into the node, this adjustment will no longer take place.


OK, so I'm not understanding how having the XYZ coordinates makes the light fall-off more realistic. All that appears to have been done is to calculate an XYZ distance... and while that might help in calculating other distances in a default coordinate system, say given:

Light XYZ
Object XYZ
Camera XYZ

Distance between Light (XYZ) and Object (XYZ) = the difference between the two coordinate sets = DeltaLO [Light(X) - Object(X), Light(Y) - Object(Y), Light(Z) - Object(Z)] = XYZ distance from Light to Object

Distance between Camera (XYZ) and Object (XYZ) = the difference between the two coordinate sets = DeltaCO [Camera(X) - Object(X), Camera(Y) - Object(Y), Camera(Z) - Object(Z)] = XYZ distance from Camera to Object

Distance between Light (XYZ) and Camera (XYZ) = the difference between the two coordinate sets = DeltaLC [Light(X) - Camera(X), Light(Y) - Camera(Y), Light(Z) - Camera(Z)] = XYZ distance from Light to Camera

[Ah! coordinate matrix math-- brings back memories!] But shouldn't any attenuation / fall-off / scatter of the light be calculated on the basis of the line that the light traverses from the light source to the camera?

Light source --> Atmosphere --> Main Object --> Atmosphere --> Camera

If the scene is rendered in a "vacuum" [i.e. there is no light scatter] there is no fall-off / attenuation / scatter, so to speak... but when there is "atmosphere" (air or water), shouldn't the fall-off / attenuation / scatter be calculated for both the light source and the camera? Less light is likely to reach the camera, since it passes through "atmosphere" twice, so the attenuation / scatter / fall-off will be greater if you do.

One might assume that the attenuation / fall-off / depth cue (sic) is just twice the calculated attenuation of the single shader [that is, the fall-off of light would be twice as great since the light had to travel from its source >>> to the object >>> to the camera]... but I don't think that assumption is correct, since the distance from the light source to the object might be greater or less than the distance from the camera to the object.
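(Put in half-distance terms: if each leg attenuates the light by 0.5 ^ (d / h), the two legs multiply, i.e. 0.5 ^ (dLO / h) * 0.5 ^ (dCO / h) = 0.5 ^ ((dLO + dCO) / h) - so it's the distances that add in the exponent, and the "twice as great" shortcut only holds when the two distances happen to be equal.)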

If that is the case then shouldn't there be a second shader node which calculates additional light fall-off? The purpose of this second node would be to calculate how much light is reaching the main object through the atmospheric medium (air or water-- this might result in either scatter or attenuation depending on the "material" doing the scattering), while the purpose of the original shader node is to calculate fall-off from the object to the camera.


If this is not the case then I must be missing something... because a single calculation of light attenuation / scatter between the camera and the object does not seem to account for attenuation / scatter from the light source to the object.