Posts posted by OlivierL

  1. Moving the discussion about the relevance of always having a front panel for every VI to its own thread.  The discussion was started here by Jordan.

     

    Nothing new of course as GregS suggested it years ago.

     

     In short, the idea popped up after talking about an FP automatic clean-up feature!  It takes time to align the elements on the FP to make it look good, yet on most subVIs it is only occasionally useful, such as when troubleshooting.  When delving into the code, it would be much faster to open block diagrams directly as you double-click on VIs (and have half as many windows open).  Icons/terminals on the block diagram could display the controls/indicators if need be.

     

     There are also others who think that, instead of having none, having multiple FPs could be useful at times to display the information differently.  I guess that if we discuss 0 or 1, we might as well think of 0, 1, ...n ;)

     

     So, the front panel: do we really always need it?  Could it be removed altogether from some VIs, or just hidden and relegated behind the block diagram?  Do we need several flavors of it in a single VI?

  2. Because the FP can be very helpful when debugging, I wouldn't support getting rid of it altogether.  However, VIs should have a property that hides the FP and only shows the block diagram (and shows the connector pane on the BD as well).  One major benefit would be having only half the number of windows open when troubleshooting a deeply nested VI, and getting to your point of interest faster than pressing "Ctrl-E" every time you open a VI.  By allowing the FP to be hidden and a BD window to open by itself, you could even have a feature that opens only the BD of a VI when the "Ctrl" key is held down, regardless of the VI's properties.

     

     Since the FP should stay, I still support Jack's original idea and Mark's tool for an FP auto clean-up, so that FPs look clean when you need them while not wasting time on them when you don't.  Nowadays, unless you spend some time organising the controls on your FP, your programs can "look bad" even if you have the cleanest code on the BD, which really is the only thing that matters.

     

     Regarding the notion of a VI being a "Virtual Instrument" and therefore requiring an interface, I think it's time to acknowledge that only a minority of VIs are actually visible to users, and to support the idea that the code is the most important part of the program, rather than the graphical representation of the function's I/Os, which serves little purpose 95%+ of the time.

    • Like 1
  3. The new trend of this topic got me digging into the information from Jack's post, and I'm quite happy to have found Mark Balla's "SubVI fixer". I'm still working on some LV2009 projects, so the automatic 4x2x2x4 is useful on top of the FP auto-alignment.  Look here if, like me, you missed it 4 years ago:  http://lavag.org/files/file/128-fp-subvi-fixer-ver-6-lv-2009/ 

     

     I definitely kudoed Jack's idea.

     

     On the original topic, VIs available on the palettes should always be 4x2x2x4.  Those 5x3x3x5 ones do make BDs look ugly until the bends get hidden behind the "unconventional" VIs.  Aligning the icons is just as important as straight wires...

  4. Mike,

     

     I had a very similar issue this week.  In my case, I had a semaphore that was created in a VI (let's call it ) and I was saving that reference in a shift register for future calls.  Everything worked fine at first, but every now and then I got error 1111.  After investigation, I found out that the VI might first be called from within a different thread, and the reference was lost when that thread finished.  That separate thread was a different GUI, running in parallel with the main application, launched with the "Run VI" Invoke Node. In the normal case, as long as that VI was first called from within the thread of the main application, the reference stayed in memory until the application completed.

     

     To solve my problem, I created the named semaphore in the main application during my Init state.  I hope this helps.

  5. Well, if you find the signal's baseline, mean, median and maximum over time, you can very likely adjust the threshold automatically and obtain pretty good results.  Use other VIs from the "Waveform" palette to perform those measurements.  I've never done it with an ECG, but I've had other applications before where I used a dynamic threshold. 
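
     LabVIEW aside, the idea of deriving a threshold from the signal's own statistics can be sketched in a few lines. This is a hypothetical illustration only: the `adaptive_threshold` helper and the factor `k` are my own inventions, not from any NI library.

```python
import numpy as np

def adaptive_threshold(signal, k=0.5):
    """Hypothetical helper: place the threshold a fraction k of the
    way between the baseline (median) and the maximum of the signal."""
    baseline = np.median(signal)
    peak = np.max(signal)
    return baseline + k * (peak - baseline)

# Example: a flat baseline with two peaks at indices 3 and 5
sig = np.array([0.1, 0.1, 0.2, 1.0, 0.1, 0.9, 0.1])
thr = adaptive_threshold(sig, k=0.5)   # 0.1 + 0.5 * (1.0 - 0.1) = 0.55
peaks = np.flatnonzero(sig > thr)      # indices above the threshold
```

     Recomputing the statistics over a sliding window would make the threshold track slow baseline drift, which is the usual problem with a fixed level.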

  6. Therefore, my recommendation for drbrittain would be to use the TCP/UDP communication method.  I personally use NI's AMC library (http://zone.ni.com/devzone/cda/epd/p/id/6091) a lot within a single LabVIEW instance, but there are VIs in there to communicate seamlessly across different machines over UDP, I believe.  You will need to tweak the code wherever "AMC_UPD port.vi" is called to use two different ports, since you are on the same IP address.

     

     Otherwise, I just did a quick test with shared variables and they successfully communicate across LabVIEW instances (2009-2011), and should also work between the 32-bit and 64-bit versions.  You can use a few variables to let each process know what the other processes are currently doing.  You can even use those as queues if you configure them as FIFOs. (http://www.ni.com/white-paper/4679/en)
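
     For readers outside LabVIEW, the two-processes-on-one-IP setup described above can be sketched with plain UDP sockets. This is an illustration, not the AMC library: each endpoint must bind its own port (here the OS picks free ones), and messages are addressed to the peer's port.

```python
import socket

# Each "instance" binds its own UDP port on the shared IP address;
# port 0 lets the OS assign a free one.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b.bind(("127.0.0.1", 0))

port_b = b.getsockname()[1]          # the port instance B listens on

# Instance A sends a datagram to instance B's port
a.sendto(b"hello from A", ("127.0.0.1", port_b))

b.settimeout(5)                      # avoid blocking forever in a demo
msg, addr = b.recvfrom(1024)

a.close()
b.close()
```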

  7. I was just reading this topic and noticed the suggestion to use Notifiers across LabVIEW instances (the 64-bit and 32-bit versions have to be different instances).  I never ran into a situation requiring this personally, but does this mean that named Notifiers (and possibly named Queues) allow communication between LabVIEW instances?

     

    I looked at the Help file and under Notifier, it says:

    "Note If you obtain a notifier reference in one application instance, you cannot use that notifier reference in another application instance. If you attempt to use a notifier reference in another application instance, LabVIEW returns error 1492."

     

    Doesn't this imply that it is not possible?

     

     There might be something more fundamental I don't understand in this issue, since I thought it was not possible to install and run both the 32-bit and 64-bit versions on the same PC...

  8. There is an easy way to open a TCP connection using the TCP interface of your choice. I can't find the code I wrote in the past using this technique, but I'm 98% certain that this is right:

     Use "VISA Open" and specify the Resource Name to be: TCPIP[board]::host address::port::SOCKET

    [board] allows you to select which interface you wish to use. You can then talk to the device using regular VISA VIs. For more information, look up "VISA Resource Name Control" in LabVIEW help files.

    Hope this helps.

    Olivier
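
     As a cross-check of the resource-name syntax, here is a small hypothetical Python helper that builds the same string. The `visa_tcp_resource` name, host and port values are my own examples; the pyvisa usage in the comment assumes you have the third-party pyvisa package and a VISA backend installed.

```python
def visa_tcp_resource(host, port, board=0):
    """Build a VISA raw-socket resource name,
    e.g. 'TCPIP0::10.0.0.5::5025::SOCKET'."""
    return f"TCPIP{board}::{host}::{port}::SOCKET"

# With an instrument attached, you would then open it through VISA:
#   import pyvisa
#   rm = pyvisa.ResourceManager()
#   inst = rm.open_resource(visa_tcp_resource("10.0.0.5", 5025))
#   inst.write("*IDN?")
```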

  9. Amila,

     I had to do something similar not so long ago, and one of the tools I found useful was (from memory) pattern matching. One example in the NI Vision set is a rotating part (with Vision 6, though...) that is located along with its angle with respect to its original position. Take a look at the examples that come with Vision to learn how to use pattern matching; it could well offer a good alternative and an easier way of finding your object. The main constraint is that the object must always have the same shape.

    Hope this helps.

  10. QUOTE (Yair @ Jan 8 2009, 12:26 PM)

    Things which are usually liable to cause memory leaks are resizing operations in loops (e.g. Build Array, Concat Strings) and opening references without closing them.

     About Build Array and Concat Strings, isn't LabVIEW supposed to release the memory automatically by itself after the loop is over? I can see that taking a huge amount of time in a long loop, but can memory leaks really happen from that?

    Olivier
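
     To illustrate the distinction being discussed, slow repeated reallocation versus an actual leak, here is a small sketch in Python/numpy (variable names are my own). Growing an array inside a loop forces a copy on every iteration; the memory is reclaimed afterwards, so it is slow rather than leaked.

```python
import numpy as np

n = 2000

# Growing inside the loop: each np.append reallocates and copies the
# whole array, so this is O(n^2) work in total. Not a leak, just slow.
grown = np.empty(0)
for i in range(n):
    grown = np.append(grown, i)

# Preallocating once avoids all those copies: O(n) work.
prealloc = np.empty(n)
for i in range(n):
    prealloc[i] = i
```

     Both loops produce the same result; only the amount of copying differs.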

  11. Thanx NevilleD,

     That's really neat, and faster and more compact than always going with "Index Array" ("Array Length" - 1). I tried it and was surprised to see it working even if you change the length of it. If you do that without an index, it takes the last "n" elements of the array. The LabVIEW Help file specifies this behavior. Neat!

    I'll reuse that one for sure.

    Olivier
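
     For comparison, text languages often expose the same convenience directly; Python's negative indexing is the obvious analogue (a sketch, not LabVIEW code).

```python
arr = [10, 20, 30, 40, 50]

# Last element, without computing len(arr) - 1 explicitly:
last = arr[-1]

# Last n elements, here n = 3:
last_three = arr[-3:]
```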

  12. Dear faisaldin,

     I took a quick look at your code and I don't see anything wrong at first glance. You will have to give us more information if you want any valuable answers. Tell us what you have done so far (for debugging it) and what exactly is not working.

     For example, could you clarify what happened when you shorted TxD and RxD with another test application? Was it successful? Could you read back what you were writing out on the port? Are you sure that the cable is crossed and not straight (i.e. TxD goes into RxD of the instrument and vice versa)?

    Also, what are the errors you are getting right now?

    Olivier

  13. I agree with you guys that it would be a really nice feature to have for multiple dimensions. I also like the idea of the permute function, and it should exist in LV. In the meantime, I can share this VI I wrote some time ago to "transpose" a 3D array. Its input is a 3D array of doubles, and you specify which dimension remains unchanged.

    Olivier
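
     Since the VI itself is attached rather than shown, here is a hypothetical numpy equivalent of that 3D "transpose": swap the two dimensions other than the one you keep fixed. The `transpose_3d` and `fixed_dim` names are my own.

```python
import numpy as np

def transpose_3d(a, fixed_dim):
    """Swap the two dimensions of a 3D array other than fixed_dim."""
    axes = [0, 1, 2]
    swap = [d for d in axes if d != fixed_dim]
    axes[swap[0]], axes[swap[1]] = axes[swap[1]], axes[swap[0]]
    return np.transpose(a, axes)

a = np.arange(24).reshape(2, 3, 4)
b = transpose_3d(a, fixed_dim=0)   # shape (2, 4, 3): each page is transposed
```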

  14. --------------------------------------

    Hello Olivier

    Thank you for the example. I tried to do the work replacing the input array by a pixel array from an array. But towards the end part of your VI, I didn't understand the use of (x+2y)? What is the use of that one?

    thnksss

    ------------------------------------------------------------------

     "x+y 2" is just the default name LabVIEW gave to the result of the addition. There is nothing special there other than adding and dividing by two (since you are doing your operation on two copies of the same array, only in different orientations, row and column). This means that you need to do your interpolation on both "branches" of the code if you want it done in both dimensions, and then merge those results together. I hope this is a bit clearer now.

     In my example, X and Transposed Array are the individual results of your treatment in both dimensions, prior to being joined back together. Both treatments could potentially be done sequentially instead of in parallel, and the results in one of the dimensions would be more accurate! *** Actually, as I realize it, this might be a better solution for interpolation: go in one dimension first, then transpose and treat the second dimension. It will definitely save you time and trouble, because your array sizes won't match in parallel. ***

    Cheers,

    Olivier
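
     The sequential approach suggested above, treat one dimension, transpose, treat the other, can be sketched in numpy. The `interp_rows` helper and the tiny 2x2 test image are my own illustrations, not the attached VI.

```python
import numpy as np

def interp_rows(a, factor=2):
    """Hypothetical helper: linearly interpolate along each row,
    turning n samples into factor*(n-1)+1 samples."""
    n = a.shape[1]
    x_old = np.arange(n)
    x_new = np.linspace(0, n - 1, factor * (n - 1) + 1)
    return np.stack([np.interp(x_new, x_old, row) for row in a])

img = np.array([[0.0, 2.0],
                [4.0, 6.0]])

# Sequential scheme: interpolate the rows, transpose, interpolate the
# (former) columns, then transpose back.
step1 = interp_rows(img)
upsampled = interp_rows(step1.T).T
```

     Doing it in two sequential passes keeps the intermediate array sizes consistent, which is exactly the "sizes won't match in parallel" point above.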

  15. Horatius,

     Another option that hasn't been mentioned so far is to use the "First Call?" function. It returns TRUE only once after hitting the Run button. Using that and a case structure afterward, you can reinitialize your indicators once per run. Using a Property Node is an easy and efficient way of accessing the value of indicators and controls. Left-click on the Property Node to change the type of information, and you can swap between Read/Write.

     It is good to know that this function will return TRUE everywhere, once per execution, which means you can use it in as many places as you want in your code.

    Hope this helps.

    Olivier

  16. v_pan, I have done something similar recently, also with a PIC. The hard part is definitely on the PIC side: allocating the correct resources and socket type for your connection. LabVIEW is really easy to use once you've been through the tutorials.

     About what Minh said in the last post: unfortunately, PICs are not ARM microcontrollers, and I doubt that NI has any compiler for those yet. You will still have to write your code in good old C! =)

    Best of luck!

    Olivier
