Posts posted by vugie

  1. The attachments for this post appear to have gone missing. Where can I find VIpreVIEW these days?

    Here is the latest version I published (I hope). The current version (0.3) is completely rewritten (object oriented) and the LabVIEW part is mostly done (parsing and template-based script generation). What is missing is a good template, and this is blocking further development, because it requires ActionScript programming skills which I have no time to learn (all the more complicated since I don't have the Flash IDE and use only OS tools). So if anybody here knows what goes on inside Flash, I'm open to cooperation - it would help unfreeze this project. Of course, if the community still needs something like this in the days of snippets...

    VIpreVIEW-0.2.1.zip

  2. Kudos to someone named Miha who created The Flea's Knees font and posted it in her blog for all the world to use.

    In this font, all the alphanumeric characters in the plain roman face are no more than 4 pixels wide, yet surprisingly readable. Then she gives us italic, bold and bold italic forms! The font takes advantage of subpixel coloration. There's no font file that you can install to type in this font (because it is a full-color font, the normal type interfaces don't work right), but perhaps someone will want to extend the icon editor to allow typing in this font. Here's the finalized images from Miha:

    Very nice indeed. However, it only works on LCD screens, and not all of them - only those with an RGB-RGB subpixel layout (which covers most LCD screens).

  3. Look carefully at what you are doing: for each mouse click you get a cluster of 3 coordinates, build a SINGLE-element array of that cluster (?), interpolate over this 1-element array (I don't know how many samples you request, but with the default linear interpolation you'll get 1 usable sample and the rest will be Inf), and then create a line out of it. So you create ONE line object, made of ONE point, for EACH mouse click...

    Instead, you should create a line object with a null mesh outside the loop, keep its reference, append a point to a buffer (kept in a shift register, for example) at each mouse click, and UPDATE the line object's mesh from the WHOLE buffer.

    BTW, you use the Mouse Down event. It is better to use Mouse Up, because Mouse Down fires continuously while the button is held.
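    The pattern above can be sketched outside LabVIEW too; here is a minimal Python sketch (the Line class and the event hook are hypothetical stand-ins for the 3D picture control's line object and the Mouse Up event):

```python
# One line object created before the loop, a point buffer standing in
# for the shift register, and a full mesh rebuild on every click.

class Line:
    """Stand-in for a 3D-picture line object with a replaceable mesh."""
    def __init__(self):
        self.mesh = []          # starts as a null (empty) mesh

    def update_mesh(self, points):
        self.mesh = list(points)

line = Line()                   # created ONCE, outside the event loop
buffer = []                     # shift-register equivalent

def on_mouse_up(xyz):
    """Handle one click: append the new point, rebuild the whole mesh."""
    buffer.append(xyz)
    line.update_mesh(buffer)    # one object with N points -- not N objects

on_mouse_up((0.0, 0.0, 0.0))
on_mouse_up((1.0, 0.0, 0.0))
```

    The important part is that `line` is created once and only its mesh changes, instead of a new one-point line per click.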

  4. It is a very specific spatial-interpolation problem, because of the nature of the source data. It is hard to suggest a general algorithm, but you may look into Kriging or Natural Neighbor interpolation. I would rather develop a custom algorithm based on the known geometrical relations between the 2D sections embedded in 3D space.

    [EDIT]

    I think that you may be interested in this

  5. It depends on how exactly you want to represent the curve. If it is OK to connect your points with straight lines, you may feed their coordinates directly to a mesh primitive in "line strip" mode. If you want a smoother curve, you have to interpolate - which is as simple as doing 1D interpolation separately for each coordinate.

    I assume that you know the order of your points; otherwise you have to sort them first, which may be more complicated than the interpolation itself.
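    The per-coordinate interpolation can be sketched in a few lines of Python/NumPy (function name and sampling choices are illustrative; LabVIEW's 1D interpolation VIs play the role of np.interp here):

```python
import numpy as np

# Smooth a 3D polyline by interpolating each coordinate separately over
# a common parameter t (here simply the point index).

def smooth_curve(points, samples=100):
    points = np.asarray(points, dtype=float)   # (N, 3) ordered samples
    t = np.arange(len(points))                 # parameter along the curve
    t_fine = np.linspace(0, len(points) - 1, samples)
    # 1D interpolation, one axis at a time (np.interp is piecewise linear;
    # swap in a spline interpolator if a smoother curve is needed)
    return np.column_stack(
        [np.interp(t_fine, t, points[:, k]) for k in range(3)])

pts = [(0, 0, 0), (1, 1, 0), (2, 0, 1)]
curve = smooth_curve(pts, samples=5)
```

    The resulting (samples, 3) array can be fed straight to a line-strip mesh.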

  6. You said that you track the (x, y, z) coordinates of the probe. Does that mean your 2D US images are always parallel to each other? If not, you also have to track the orientation of the probe. And I think you should, if the probe is moved manually (it is practically impossible to move such a probe by hand while keeping all 3 orientation angles within any reasonable precision).

    Next point: what is the nature of the 3D information you want to compute from subsequent frames? Do you want a 3D volumetric image, a reconstruction of some object's surface, a set of iso-surfaces? A point-cloud approach is not necessary in every case. E.g., for a volumetric image I would rather use an interpolation technique (if the frames are dense enough). And a volumetric image is a good starting point for any surface-extraction technique, since there are many well-established algorithms for that.

  7. I'm still for something a little more controversial for the back, e.g. a modification of my last year's proposal:

    post-7450-017737700 1277800964_thumb.png

    Or something about the rest of the private properties, or XNodes, or the 1% of the icon editor which is not open source (or its license, which is ???), or the questionable performance of the In Place Element structure, or any other hot topic.

    Let's show that we are independent!

  8. I didn't run your VIs, but my guess is that you do get the right coordinates - coordinates for the rotated object. Because you place the sensor as a child of the main object, its translation is relative to that object. Pick Point gives you absolute coordinates. You decided to use the object transformation to mimic camera behavior. This approach has both advantages and drawbacks. The drawback is that you have to apply the inverse transformation to each absolute point you get from Pick Point in order to obtain its relative location. The inverse is easy to calculate: it is still a 4x4 matrix with a 3x3 rotation component R, a 3x1 translation component t, 3 zeros and a 1. You simply transpose the rotation component (the inverse of a rotation matrix is always its transpose) and replace the translation with -R^T * t (note that negating t alone is not enough).

    But I would rather advise you to use a camera as a camera (with either the Setup Camera method or by changing the modelview matrix). Things will be much simpler then.
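    A quick numeric check of that inverse, as a Python/NumPy sketch (the example matrix is an arbitrary rotation-plus-translation):

```python
import numpy as np

# Invert a rigid 4x4 transform [R | t; 0 0 0 1] without a general matrix
# inverse: transpose the rotation block, set the translation to -R^T @ t.

def invert_rigid(m):
    r, t = m[:3, :3], m[:3, 3]
    inv = np.eye(4)
    inv[:3, :3] = r.T
    inv[:3, 3] = -r.T @ t
    return inv

# 90-degree rotation about Z plus a translation, as a quick sanity check
c, s = 0.0, 1.0
m = np.array([[c, -s, 0, 1],
              [s,  c, 0, 2],
              [0,  0, 1, 3],
              [0,  0, 0, 1]], dtype=float)
identity = invert_rigid(m) @ m
```

    Multiplying the result by the original matrix should give the identity, which is easy to verify for any rigid transform.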

  9. I thought that by writing into the modelview matrix I was updating it, didn't I? :unsure:

    That part is OK, but in the main loop you read the modelview matrix, calculate the camera vectors and use them with "Setup Camera" without modifying them at all.

    Can you please elaborate a bit more on what you mean by "BTW, did you try shift-dragging on the control (with perspective projection)?"

    VIDEO

    Pressing Shift makes a magnifying glass appear; pressing Ctrl makes a panning cross appear.

  10. Dear Vugie,

    Thanks for your example and also your great help in understanding the procedure. I tried to apply your sample to my VI, but the problem is that when I run it I have no control over the camera orientation: it goes up, down, left and right, but not where I want. When I zoom in and a point on the left side is no longer visible, I want to be able to navigate the object to the left. You can also see my zooming procedure here, but I can only zoom towards one face; my VI keeps X and Y fixed and changes only Z (actually the imaginary depth of the computer).

    After reading some OpenGL code and your hint on camera projection, I realised that there is a method called frustum. I used it (as you can see in my shot): I turned off the camera projection and used it instead. It zooms exactly like zooming with the mouse and keyboard, but navigation stretches the view somehow. I just want to know your opinion on whether it is a good idea or not.

    Thanks again,

    Trin

    P.S.: I will attach a screenshot and also the VIs.

    The frustum defines the field of view, but the far and near planes are not for zooming - they clip objects outside them. For orthographic projection, modifying the remaining parameters may act like a zoom, but the effect will depend on the control's proportions - I wouldn't recommend it. BTW, did you try shift-dragging on the control (with perspective projection)?

    You do strange things in your VIs, e.g. reading the modelview, extracting the orientation and applying it again without modification...

    You should relate the camera movement step to the size of your object, to be sure that the steps you make are not 100x larger than the object.

    If you move the camera to some absolute position/orientation, turn off the native controller, because any click will then be unpredictable.

    And assign some default values to your controls, because feeding zeros to the Delaunay method crashes LabVIEW (brrrr, I had some calculations running...)

  11. You told me that I have to deactivate my spherical camera? Did I understand that correctly? If I do so, then I have no movement, even if I set up my camera manually - or maybe I didn't understand it correctly.

    It applies only to the LV native controller. You can change its mode either from the control's context menu or with the respective property. You can still program your own camera behavior and user interaction. In the other topic (I guess both refer to the same project) you wrote that you want to catch user clicks on the control. Because the native controller is based on the left mouse button, it would be good to turn it off and implement your own (based on buttons, a touchpad-like area, right-clicking, clicking with a modifier, an accelerometer-based device or anything you want).

    About the sample: can I use the translation array as a zooming factor? I mean the one that we multiply by the rotation matrix?

    The question is which projection you use - perspective or orthographic. With perspective you can simply use the camera position as a zoom. With orthographic projection, moving the camera along the viewing direction has no effect - you have to use the projection matrix (from the respective property), specifically the first two components of its diagonal. You may use this method for perspective projection as well.

    And what do we mean by direction? Does it describe how far or near we are to the eye of the camera?

    Direction is a unit vector pointing along the camera's optical axis. In terms of the Setup Camera method: direction = ( target - position ) / | target - position |
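    That formula, sketched in Python/NumPy with illustrative values:

```python
import numpy as np

# Unit direction vector along the camera's optical axis, computed from
# camera position and look-at target.

def camera_direction(position, target):
    d = np.asarray(target, float) - np.asarray(position, float)
    return d / np.linalg.norm(d)   # normalize to unit length

# Camera at (0, 0, 5) looking at the origin: direction points down -Z
direction = camera_direction(position=(0, 0, 5), target=(0, 0, 0))
```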

  12. If you set Camera Controller to "None", the current camera position is the one you most recently set :)

    I tried to prepare a simple example showing how to convert the modelview matrix to vectors usable with the Setup Camera method, but I found a strange issue which looks like a bug... so I have to dig a little more.

  13. The modelview is a 4x4 matrix which consists of:

    - 3x3 rotation matrix in upper left corner

    - 3x1 translation vector at the right side

    - 1 in bottom right corner

    - zeros in the rest (1x3)

    Try googling "modelview matrix" - the term is exactly the same as in OpenGL.

    I think the most convenient method would be to use the Setup Camera method for translation as well (storing the current camera position and orientation and blocking the native LV controller).
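    Assuming the OpenGL convention (eye at the origin looking down -Z, with +Y up), the conversion from a modelview matrix to Setup-Camera-style vectors can be sketched as follows (Python/NumPy; the trivially constructed matrix at the end is just a check):

```python
import numpy as np

# Recover camera vectors from a 4x4 modelview matrix M = [R | t]
# (world -> eye). In eye space the camera sits at the origin looking
# down -Z with +Y up, so in world coordinates:
#   position  = -R^T @ t
#   direction = -R^T @ [0,0,1] = -(third row of R)
#   up        =  R^T @ [0,1,0] =  (second row of R)

def camera_from_modelview(m):
    r, t = m[:3, :3], m[:3, 3]
    position = -r.T @ t
    direction = -r[2, :]
    up = r[1, :]
    return position, direction, up

# Identity rotation, camera pulled back: eye at (0,0,5) looking at origin
m = np.eye(4)
m[:3, 3] = [0.0, 0.0, -5.0]      # world point (0,0,5) maps to eye origin
position, direction, up = camera_from_modelview(m)
```

    The direction and up vectors plug straight into the formula from the earlier post; the target can be taken as position + direction times any positive distance.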

  14. Also, how can I get a reference to a 3D scene mesh which has a Normal Array? As far as I understand, the 3D scene mesh has a property node that makes it possible to read the normal vector.

    Here is a method:

    post-7450-127624361356.png

    You have to check the Draw Mode (triangles or quads) in order to know how to interpret the Indices (refer to the mesh example in LV). It's also good to check the Normal Binding Mode (whether normals are defined per vertex, per face, or not bound at all).

  15. OK. As I said, it is only possible when the object hit with the mouse is a mesh (you can check by trying to downcast the object's drawable reference to the mesh class and checking for an error). You are loading an external file, so you cannot be sure that it contains normal data (normals make objects look smoother). Let's assume it does not contain such data and you have to calculate it from the face orientation. So you have to find the face clicked with the mouse. The Pick Point method gives you only 3D coordinates, not related to the geometry. Use the "Vertex Array" and "Indices" properties of the mesh to get information about the faces of the clicked object, and then search for the face which contains the point you acquired with Pick Point. Once you find it, calculating the normal is as simple as taking the cross product of two edge vectors - but searching for the face is not so simple. Pick Point does this behind the scenes, but unfortunately it doesn't expose the result.

    There are many methods to find a face, from relatively simple and slow to complex and fast. The simplest is checking faces one by one for whether each contains the point (there are a few methods for that, too). Which to use depends on the complexity of your geometry, how often you perform the search, and any additional information you know about the geometry.
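    The brute-force variant can be sketched as follows (Python/NumPy; the names, the triangles-only assumption, and the tolerance are illustrative):

```python
import numpy as np

# Walk the index array triangle by triangle, test whether the picked
# point lies in the triangle's plane (within a tolerance) and inside the
# triangle (barycentric coordinates), then return that face's normal
# from the cross product of two edge vectors.

def face_normal(vertices, indices, point, tol=1e-6):
    vertices = np.asarray(vertices, float)
    point = np.asarray(point, float)
    for i in range(0, len(indices), 3):
        a, b, c = vertices[indices[i:i + 3]]
        n = np.cross(b - a, c - a)            # face normal (unnormalized)
        if not np.any(n):
            continue                          # skip degenerate faces
        # point must lie in the face plane...
        if abs(np.dot(n, point - a)) > tol * np.linalg.norm(n):
            continue
        # ...and inside the triangle (solve for barycentric u, v)
        m = np.column_stack([b - a, c - a, n])
        u, v, _ = np.linalg.solve(m, point - a)
        if u >= -tol and v >= -tol and u + v <= 1 + tol:
            return n / np.linalg.norm(n)
    return None                               # no face contains the point

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
normal = face_normal(verts, [0, 1, 2], point=(0.25, 0.25, 0.0))
```

    For large meshes this linear scan gets slow; a spatial index (grid, octree, BVH) is the usual next step.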

  16. I don't understand the context...

    All the normal data for a mesh is the data you provide during mesh initialization (one normal vector either per vertex or per face). You may also use the respective property to get the Normal Array. This is not possible for standard objects like a sphere or a cone.

    If you have per-vertex binding and you want to determine the normal at a point which lies between vertices, you have to take a weighted sum of the normals from the 3 (for triangles) or 4 (for quads) closest vertices.

    Remember that when the object is rotated or non-uniformly scaled, the normal vectors must be transformed as well.
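    The last point deserves a numeric sketch: under a non-uniform scale (or any non-rigid transform), normals must be transformed by the inverse transpose of the model matrix, not by the matrix itself (Python/NumPy; the example plane and scale are illustrative):

```python
import numpy as np

# Transform a surface normal correctly under a non-rigid model matrix:
# use inverse(M)^T so the normal stays perpendicular to the surface.

def transform_normal(model3x3, normal):
    n = np.linalg.inv(model3x3).T @ np.asarray(normal, float)
    return n / np.linalg.norm(n)

scale = np.diag([2.0, 1.0, 1.0])               # stretch X by 2
n = transform_normal(scale, (1.0, -1.0, 0.0))  # normal of the plane y = x
tangent = scale @ np.array([1.0, 1.0, 0.0])    # transformed surface direction
```

    Transforming the normal with the scale matrix directly would give (2, -1, 0), which is no longer perpendicular to the stretched surface; the inverse transpose fixes this.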

  17. Thanks for the idea. I am trying to use the "Setup Camera" method, but I really can't understand the purpose of the up direction. Any idea?

    Look at a window from where you are standing. Remember the direction. Now stand on your head and look in the same direction. What is the difference between these two views? The "Up direction"! :)

  18. "Setup Camera" is a method (via an Invoke Node) of the 3D Picture Control. It sets the camera position and viewing direction, so you can make buttons for moving the camera around your object. Normally this can be done by setting the proper camera mode (spherical, flying, etc.) and rotating the scene with the mouse without any additional coding, but that requires modifier keys or the middle mouse button (so I guess on a touch panel it is limited to rotation, which is done with the LMB).
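    The kind of arithmetic such buttons would feed to Setup Camera can be sketched as a spherical orbit (Python; the names, angle conventions and the fixed world-Z up vector are assumptions, and the up vector would need refinement near the poles):

```python
import math

# Orbit the camera around a target on a sphere: buttons increment the
# azimuth/elevation angles or the radius, and this converts them to the
# position/target/up triple that a Setup-Camera-style call expects.

def orbit_camera(target, radius, azimuth, elevation):
    tx, ty, tz = target
    x = tx + radius * math.cos(elevation) * math.cos(azimuth)
    y = ty + radius * math.cos(elevation) * math.sin(azimuth)
    z = tz + radius * math.sin(elevation)
    position = (x, y, z)
    up = (0.0, 0.0, 1.0)          # world Z up; degenerate at the poles
    return position, target, up

# Zero angles put the camera on the +X axis at the chosen radius
position, target, up = orbit_camera((0, 0, 0), radius=5.0,
                                    azimuth=0.0, elevation=0.0)
```

    Zoom buttons then only change the radius, while pan buttons shift the target (and position) together.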

  19. You may use the "Setup Camera" method of the 3D Picture Control in conjunction with dedicated buttons for scrolling and zooming. Or, alternatively, buttons for switching between zoom/pan/any other functionality plus the Mouse Down and Mouse Up events - but I don't know whether those events work with a touch panel.
