
Help drawing 3d shapes in 3d picture control


AusTEX


I need some help figuring out how to draw decent-looking shapes in the 3D picture control. In my application I need to draw either a cylinder or a cube within another cube. If the top and bottom dimensions of the cube/cylinder are the same, I can use the built-in functions to draw them and everything looks pretty good. If the top and bottom need to be different sizes, I have to use a mesh to draw the shape I want, and this is where I am running into trouble.

If I want to draw a cylinder where the top and bottom have different diameters, I create three meshes: one each for the top, body, and bottom. When I do this, the resulting cylinder doesn't look as good as the cylinder drawn with the built-in function. The same is true for drawing a cube using meshes for the top, body, and bottom. So how can I make the shapes drawn with meshes look similar to those drawn with the built-in functions? There are several settings that must be specified when using meshes that I think may be the source of my trouble, such as the color and normals arrays. Attached is a set of VIs that show what I'm talking about; the main two are "~Draw Layer With Cube" and "~Draw Layer With Cylinder".
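For reference, here is a minimal sketch of the kind of geometry I mean (Python/NumPy rather than LabVIEW, purely as an illustration; the function name and two-ring layout are my own, not taken from the attached VIs): the side wall of a cylinder whose top and bottom radii differ, described as a quad mesh between two rings of vertices.

```python
import numpy as np

def frustum_side_mesh(r_bottom, r_top, height, sides=24):
    """Vertices and quad indices for the side wall of a cylinder
    whose top and bottom radii differ (a truncated cone)."""
    theta = np.linspace(0.0, 2.0 * np.pi, sides, endpoint=False)
    bottom = np.column_stack([r_bottom * np.cos(theta),
                              r_bottom * np.sin(theta),
                              np.zeros(sides)])
    top = np.column_stack([r_top * np.cos(theta),
                           r_top * np.sin(theta),
                           np.full(sides, height)])
    vertices = np.vstack([bottom, top])   # bottom ring first, then top ring

    # One quad per side: bottom i, bottom i+1, top i+1, top i
    # (counter-clockwise when viewed from outside, so face normals
    # computed from this winding point outward).
    quads = []
    for i in range(sides):
        j = (i + 1) % sides
        quads.append([i, j, sides + j, sides + i])
    indices = np.array(quads, dtype=np.int32).ravel()
    return vertices, indices
```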

The other related problem I can't figure out is how to make the picture control automatically show everything being drawn (like autoscaling on a graph). If I change the size of the cube I want to draw, I'll reach a point where I can't see all of it unless I zoom out. I tried the 'AutoFocus' method, but this changes the camera view/orientation. Is there a way to programmatically zoom out to show everything (or something similar)?

Here's the code

3D Shapes.zip


I hardly had time to study it deeply, but you definitely messed up something in the normals calculation. There are also a few properties that make things nicer, but the key point is correct normal calculation. Write a small VI to visualize your normals for debugging purposes.
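Something along these lines might do for the debugging idea; a minimal sketch (Python/NumPy, illustrative only, names my own) that turns each face center and its normal into a short line segment that could be drawn as a separate line mesh:

```python
import numpy as np

def normal_segments(face_centers, face_normals, scale=0.1):
    """Pairs of points (start, end) for short line segments that show
    each face normal; drawing these as lines makes flipped or
    zero-length normals easy to spot."""
    face_centers = np.asarray(face_centers, dtype=float)
    face_normals = np.asarray(face_normals, dtype=float)
    ends = face_centers + scale * face_normals
    return np.stack([face_centers, ends], axis=1)   # shape (n_faces, 2, 3)
```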


Thanks for the info. The VI I used to calculate the normals is called 'calculate normals.vi' (I saved it under a different name); I found it in the 'Using Meshes.vi' example that ships with LabVIEW 2010. I'll have to go back and look at what that VI is doing. Do you know if there is a way to get the normal array that is used by the built-in functions for drawing cubes and cylinders?

Also, can you be more specific about the other properties that make things nicer?


I looked at 'calculate normals.vi' and it looks like it might be specific to the example it is used in rather than a general-purpose VI. I am clearly ignorant about the different mesh settings, especially when it comes to the normals array. So let's assume that I want to draw a square surface. I create a vertex array with the following points:

0,0,0

1,0,0

1,1,0

0,1,0

This should give me a square in the XY plane and would define a primitive (am I correct so far?). If I set the ColorMode and NormalMode to 'Per Primitive', should the ColorArray and NormalArray each contain just one element? Does the NormalArray need one or two points to define the surface normal? I would think there would be two possible values for the NormalArray depending on whether you are looking at it from the top or the bottom. Is this correct? Assuming you were looking at it from the top, what does the NormalArray need to be? I would think if it were defined by one point it would be (0.5, 0.5, 1), or if by two points it would be (0.5, 0.5, 0) and (0.5, 0.5, 1). If you were looking at it from the bottom, the X and Y would be the same but the Z value would be -1. Also, if this is correct, what importance does the Z value have? Should it be 1 or 100, or does it matter?
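To make the question concrete, here is how I would expect those arrays to line up for that single quad with 'Per Primitive' binding (a Python/NumPy sketch of the data only, not LabVIEW code; the one-normal-per-face reading is the one confirmed in the replies below):

```python
import numpy as np

# One quad in the XY plane, viewed from the top (+Z side).
vertices = np.array([[0, 0, 0],
                     [1, 0, 0],
                     [1, 1, 0],
                     [0, 1, 0]], dtype=float)
indices = np.array([0, 1, 2, 3], dtype=np.int32)

# Per-primitive binding: one entry per face, so single-element arrays.
colors  = np.array([[1.0, 0.0, 0.0]])   # one RGB color for the whole face
normals = np.array([[0.0, 0.0, 1.0]])   # one unit vector, pointing straight up;
                                        # (0, 0, -1) if the face should face down
```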


Here is a simple mesh defined with both quads and a quad strip:

[attached image: the same example mesh defined as quads and as a quad strip]

As you can see, the number of faces can be read off from the indices array: L/4 for the quads and L/2 - 1 for the quad strip (L being the length of the indices array). In both cases the length of the vertices array is 10.

Normals are a visual approximation of surface smoothness; they are used to calculate the shading on a particular face. For a smooth surface, a normal vector is defined at every point of it. For a mesh-based approximation, normals are defined either for each vertex or for each face ('per vertex' or 'per primitive' normal binding, respectively), so the normal is "tied" either to a vertex or to the center of a face. For all the other points (between vertices), the renderer interpolates the normals to calculate proper shading.

The proper way, when building a mesh from a mathematical representation of a surface, is to calculate the vertices from the surface's parametric equations and the normals from the same equations. If you calculate the normals from the faces instead, you may lose some information (shape, or surface orientation if it matters), but that is acceptable in many cases, including yours, as long as you know the orientation of the faces (note that the order of each group of 4 indices in the "quads" case may not be preserved). 'Per primitive' normal binding is better here; when calculating normals from equations, it is more convenient to calculate them per vertex.
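To illustrate "vertices from the parametric equations and normals from the same equations" for the case at hand, here is a sketch (Python/NumPy, illustrative; the tapered-cylinder parametrization is my assumption about the shape being drawn) of per-vertex normals for a cylinder whose radius varies linearly with height:

```python
import numpy as np

def frustum_side_vertices_and_normals(r_bottom, r_top, height, sides=24):
    """Per-vertex positions and unit normals for a linearly tapered cylinder,
    P(theta, z) = (r(z) cos(theta), r(z) sin(theta), z) with
    r(z) = r_bottom + (r_top - r_bottom) * z / height."""
    theta = np.linspace(0.0, 2.0 * np.pi, sides, endpoint=False)
    slope = (r_top - r_bottom) / height          # dr/dz

    rings_v, rings_n = [], []
    for z, r in ((0.0, r_bottom), (height, r_top)):
        ring = np.column_stack([r * np.cos(theta), r * np.sin(theta),
                                np.full(sides, z)])
        # The cross product of the partial derivatives gives the direction
        # (cos(theta), sin(theta), -dr/dz); normalizing it yields the
        # outward unit normal.
        n = np.column_stack([np.cos(theta), np.sin(theta),
                             np.full(sides, -slope)])
        n /= np.linalg.norm(n, axis=1, keepdims=True)
        rings_v.append(ring)
        rings_n.append(n)

    return np.vstack(rings_v), np.vstack(rings_n)   # per-vertex binding
```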

Normals have to have a length of 1, unless you set Specials.Autonormalize to "on". All normals should point to the same side of the surface (outside or inside, in the case of a cylinder); later you can choose which side to display with the face culling property ("front" is the direction the normals point). The color and normal binding modes are independent.

You definitely have to concentrate on calculating the normals correctly before beautifying anything. Note that half of the normals calculated for the cylinder side are (0, 0, 0)...
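A quick sanity check along those lines (Python/NumPy, illustrative only) that flags zero-length or non-unit normals before the mesh ever reaches the picture control:

```python
import numpy as np

def check_normals(normals, tol=1e-6):
    """Report normals that are zero-length or not of unit length."""
    normals = np.asarray(normals, dtype=float)
    lengths = np.linalg.norm(normals, axis=1)
    zero = np.flatnonzero(lengths < tol)
    off_unit = np.flatnonzero(np.abs(lengths - 1.0) > 1e-3)
    if zero.size:
        print(f"{zero.size} zero-length normals at indices {zero.tolist()}")
    if off_unit.size:
        print(f"{off_unit.size} normals are not unit length "
              "(normalize them, or turn on auto-normalization)")
    return zero, off_unit
```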

Look at my 3D Surface Editor as an example of mesh generation and display.


Thanks for the info. I'm still trying to wrap my head around the normals array and how many points it takes to define one surface normal. You mention that the length of a surface normal must be 1 (except for special cases), so does this mean you must have two points to define a surface normal? If not, how is the length calculated? If I have a square surface in the XY plane defined by four points with its center at (0, 0), and I have the surface normal binding mode set to 'per primitive', then if I read your info correctly the surface normal should be relative to the center of the face at (0, 0, 0). So would the surface normal array be:

an array with one element (0, 0, 1)

or an array of two elements

(0,0,0) and (0,0,1)

or am I completely misunderstanding the whole surface normal concept?


A normal is a vector of length 1, perpendicular to the surface. It is only a direction; you don't have to specify any starting point. So the length of the normals array should be equal to the number of faces for 'per primitive' binding, or equal to the number of vertices (not indices) for 'per vertex' binding.
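In other words, only the number of normals changes between the two binding modes; a small sketch of that bookkeeping (Python, illustrative, assuming quad faces with 4 indices each):

```python
def expected_normal_count(n_vertices, n_indices, binding):
    """How many normals the mesh expects, assuming quad faces
    (4 indices per face)."""
    if binding == "per vertex":
        return n_vertices            # one normal per vertex, not per index
    if binding == "per primitive":
        return n_indices // 4        # one normal per face
    raise ValueError(f"unknown binding: {binding}")

# The square face discussed above: 4 vertices, 4 indices, 1 face.
assert expected_normal_count(4, 4, "per primitive") == 1
assert expected_normal_count(4, 4, "per vertex") == 4
```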

For the first face in my picture, the normal may be calculated as ((v1 - v0) x (v9 - v0)) / |(v1 - v0) x (v9 - v0)|, where "x" is the cross product and vn is the position vector of the n-th vertex.
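The same calculation in Python/NumPy (illustrative; v0, v1 and v9 stand for the positions of the corresponding vertices in the picture):

```python
import numpy as np

def face_normal(v0, v1, v9):
    """Unit normal of the face: cross product of two edge vectors
    leaving v0, normalized to length 1."""
    n = np.cross(np.asarray(v1, dtype=float) - np.asarray(v0, dtype=float),
                 np.asarray(v9, dtype=float) - np.asarray(v0, dtype=float))
    return n / np.linalg.norm(n)
```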
