Posts posted by PJM_labview

  1. LabVIEW 2020

    I am seeing inconsistent behavior with a public VIM calling a dynamic dispatch protected member. The call is not broken as long as no child is made or loaded into memory. Once a child has been made (and the override created), trying to reconnect the call to use the child breaks the wire (at that point I am unclear what the desired behavior is).

    [screenshot]

    The call is not broken (but the converted VIM is broken with a scope error).

    Also, the code above runs just fine (and the protected dynamic dispatch does as well).

     

    [screenshot]

    Everything seems fine (but the converted VI is still broken).

     

    [screenshot]

    The code is broken and cannot be "unbroken".

    class vim bug.zip

  2. On 4/6/2019 at 2:29 AM, drjdpowell said:

    I'm curious; what's the use case you have?

    Use case: The selection of the target VI will not be under my control, so I have no knowledge about it ahead of time (so I can't do a CBR). I can detect other potential issues with the Run VI method (ex: reserved for execution) through the "Open VI" primitive, but I cannot detect lifetime-related issues (shown in the example). I can script a wrapper (at edit time) around the target to do a static call (somewhat similar to what the Actor Framework does with message classes), but I would rather avoid doing this if I can help it, as it introduces various complications.

  3. The lifetime of a VI launched by the "Run VI" method is managed by LabVIEW (meaning the caller does not own the lifetime).

    Practically, as soon as the target VI terminates, LabVIEW will dispose of the resources created by the target, resulting, among other things, in every reference created within the target's scope dying (see attached example).

    [screenshot]

    I have a use case where this is undesirable, and I would love to have the flexibility of the Run VI method (generically addressing controls on the target VI) while keeping ownership of the VI call chain in the launcher (like the CBR does).

    Note: as far as I know, what I am asking is not possible, but I would love to be told otherwise.

    Thanks

    PJM

    Cross-posted on the NI forum.

    rvm.zip

  4. Hi

    We have an application where we need to have a custom PCIe board transfer data to the PC using DMA.

    We are able to get this to work using the NI-VISA Driver Wizard for PXI/PCI boards.

    The recommended approach is to have VISA allocate memory on the host (PC) and have the PCIe board write to it (as seen below).

    [screenshot]

    While this approach works well, the memory allocated by VISA for this task is quite limited (around 1-2 MB), and we would like to expand this to tens of MB.

    Note: The documentation (and help available on the web) regarding these advanced VISA functions (for instance "VISA Move Out 32" and "VISA Move In 32") is sparse. If someone has deep knowledge regarding these, please feel free to share how we could allocate more memory.

    [screenshot]
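
    For reference, the equivalent operations through the VISA C API look roughly like the sketch below (the resource name, buffer size, and address space constant are assumptions for illustration, not our actual code):

        #include <visa.h>
        #include <stdio.h>

        /* Sketch: allocate device-visible memory through VISA and read it
           back with viMoveIn32. Resource name and sizes are placeholders. */
        int main(void)
        {
            ViSession rm, dev;
            ViBusAddress offset;
            ViUInt32 buffer[256];

            if (viOpenDefaultRM(&rm) < VI_SUCCESS)
                return 1;
            /* Hypothetical resource name; use your board's. */
            if (viOpen(rm, "PXI0::2::INSTR", VI_NULL, VI_NULL, &dev) < VI_SUCCESS)
                return 1;

            /* This is where the 1-2 MB ceiling shows up: larger requests fail. */
            if (viMemAlloc(dev, sizeof(buffer), &offset) < VI_SUCCESS)
                printf("viMemAlloc failed\n");

            /* Read 256 32-bit words from the region allocated above
               (VI_PXI_ALLOC_SPACE addresses memory obtained via viMemAlloc). */
            viMoveIn32(dev, VI_PXI_ALLOC_SPACE, offset, 256, buffer);

            viMemFree(dev, offset);
            viClose(dev);
            viClose(rm);
            return 0;
        }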

    Since we are not able to allocate more memory using the VISA functions at this time, we investigated doing the same operation using the LabVIEW Memory Manager functions, which allow us to allocate much larger memory blocks.

    Below is the resulting code.

    [screenshot]

    Unfortunately, while we can validate that reading and writing to memory this way works well outside the context of a DMA transfer, doing a DMA transfer does NOT work (although the board thinks it did, and the computer is not crashing).

    We are wondering why this is not working and would welcome any feedback.

    Note: the DMA transfer implemented on the board requires contiguous memory to be successful. I believe that the LabVIEW Memory Manager functions do allocate contiguous memory, but correct me if I am wrong.
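
    For context, the Memory Manager side boils down to a couple of C calls from extcode.h (reachable from a Call Library Function Node); here is a minimal sketch of the allocation, with the buffer size as a placeholder:

        #include "extcode.h"  /* LabVIEW Manager API (ships in cintools) */

        /* Sketch: allocate a large flat buffer with the LabVIEW Memory
           Manager and hand its address to the board as the DMA target.
           Whether the underlying physical pages are contiguous (as the
           board's DMA engine requires) is exactly the open question. */
        #define BUF_BYTES (64 * 1024 * 1024)  /* 64 MB target buffer */

        static UPtr AllocDmaTarget(void)
        {
            UPtr p = DSNewPClr(BUF_BYTES);  /* allocate and zero-fill */
            /* The pointer value is what we pass to the PCIe board. */
            return p;
        }

        static void FreeDmaTarget(UPtr p)
        {
            if (p != NULL)
                DSDisposePtr(p);
        }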

    To troubleshoot this, I allocated memory using the LabVIEW Memory Manager functions and tried to read it back using VISA, and I got a "wrong offset" error (note: this test may not be significant).

    Another data point: while the documentation for DSNewPtr does not mention DMA transfers, the one for DSNewAlignedHandle does. Experiments using LabVIEW Memory Manager handles have not gotten us anywhere either.

    We would welcome any feedback regarding either approach, and about the capabilities of the LabVIEW Memory Manager functions in this use case.

    Thanks in advance.

    PJM

    Note: We are using LabVIEW 2014, if that matters at all.

     

  5. I have had to face the same issue in the past, and I did not find a solution.

    It just occurred to me, though: could you subpanel the background monitoring async VI into your probe and see if the subpaneled VI's UI updates (even though the probe's does not)?

    I am not sure that this will work, but might be worth a quick try.

  6. I don’t have 2014 and can’t open your VI, but my first thoughts are to type cast the U64 array to an I16 array then do a FOR loop over the number of lines to do the chopping.   Why is this data not in I16 in the first place, BTW?

    I down-converted it to 2013 and 2011. Since that post yesterday, I have a slightly faster version that operates on each line (like you suggested) [see image below]. Also, typecasting is not faster than split-and-interleave.

     

    [screenshot]

     

    If you can afford to be a frame or two behind, then you might want to split the resizing and unpacking into separate pipelines.

    This is an interesting suggestion. I will have to give it more thought.

     

    Thanks

  7. Hi Everyone.


     


    I am trying to figure out the most efficient way to manipulate a somewhat large array of data (up to about 120 megabytes) that I am currently getting from an FPGA. This data represents an image, and it needs to be manipulated before it can be displayed to the user. The data needs to be unpacked from U64 to I16, and some of it needs to be chopped (essentially, chop off 10% on each side of the image, so an 800 x 480 image becomes 640 x 480).
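
    In textual form, the per-frame operation is roughly the sketch below (the packing of four I16 values per U64, the word order, and the row-major layout are assumptions made to keep the example concrete):

        #include <stdint.h>
        #include <string.h>

        /* Sketch: view the U64-packed pixels as a flat I16 array and crop
           10% of the columns from each side: 800 x 480 in -> 640 x 480 out. */
        enum { W_IN = 800, H = 480, CHOP = W_IN / 10, W_OUT = W_IN - 2 * CHOP };

        void unpack_and_crop(const uint64_t *packed, int16_t *out)
        {
            /* Reinterpret as I16 (byte order of the FPGA stream is an
               assumption; swap here if needed). */
            const int16_t *in = (const int16_t *)packed;

            for (int row = 0; row < H; row++) {
                /* Copy one cropped line, skipping CHOP pixels on each side. */
                memcpy(out + (size_t)row * W_OUT,
                       in + (size_t)row * W_IN + CHOP,
                       W_OUT * sizeof(int16_t));
            }
        }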


    I have tried several approaches, and the image below shows the one that is the quickest, but there might be further optimization that could be done.


     


    I am looking forward to seeing what others can come up with.


     


    Note 01: I am including a link to the benchmark VI, which has a quite large image in it, so the VI is about 40 MB.


    Note 02: This is cross-posted on the NI Forum.


     


    Thanks


    [screenshot]

  8. 1st oddity: The documentation specifies the following:

     

    "Clusters and enumerations defined as type definitions or strict type definitions—LabVIEW uses the label of the type definition or strict type definition as the name of the .NET structure"

     

    In reality, I find that to be incorrect. Instead, it uses the typedef instance name. Am I missing something?

    Note: I know I can rename it, but there are very good reasons why I would love to have it work as described in the documentation.

     

    [screenshot]

     

    2nd oddity: When a typedef is part of a class, it does not show up as such when called from a .NET app.

     

    For instance if I have the following in my LabVIEW project:

    • A class is called shape
    • A method in the class is called getBound
    • A cluster typedef in the class is called Bound
    • An interop build spec where the assembly namespace is LVInterop

    Then when I call the resulting dll from c# I see:

    • LVInterop.shape.getBound (yeah, the getBound method is part of the shape class [as expected])

    but I also see:

    • LVInterop.Bound for the Bound typedef (?? how come the Bound type is not part of the class ?? [I would expect LVInterop.shape.Bound])

    Any feedback on these two oddities will be very welcome.

  9. Ya, I use the built-in parser to do that too.

     

    I am not entirely sure what you mean about putting the icon from the "monster" class into an empty class, but my goal is to not have our developers do extra work in order to use an internal tool that, among other things, gets library thumbnails.

  10. Reviving that thread...

     

    ... it's a binary file that has been escaped in order to be saved inside the .lvclass file....

     

    I am wondering if anybody has more information about the escape (or encoding) mechanism. Nothing obvious (such as base64, for instance) comes to mind looking at it.

     

    I would love to be able to get (read) the class icon without loading the class in memory.

  11. Like you, we have been experiencing this slowdown for years, as we routinely have projects using classes with 5k or more VIs.

    Regarding fabric's comment about refactoring/restructuring your code: this may help, but it is a sad reality when you have to dump a perfectly valid architecture for another one simply because the LabVIEW IDE makes it unusable.

    With 9K VIs, I shudder to think how long it would take to apply a typedef change, given that in our framework this can sometimes take a couple of minutes.

    Regarding solutions: the single biggest bang for the buck that we have found is to NOT use a project. This helps a lot. Just open the lvclass you need to work on in memory. Even if you were to open all your classes one by one, as long as you are not using a project, the IDE will be significantly faster (compared to having the same classes in a project).

    By the way, I got fabric's Idea :thumbup1: to 103 now.

  12. Pierre,

    We do have a customer application with similar statistics (6000+ VIs but only around 100+ classes), and the build does take about 15 hours. Like you, we have a lot of problems with the build (it is very easy to break it for no obvious reason, and when it takes so long to do a rebuild, this can very quickly eat weeks of your time).

    Here is some info that might help you.

    • We noticed that adding an XControl can break our build (ex: adding the 3D surface plot XControl did that [it took us 2 weeks to find that this was the issue]).
    • Make sure your build output is as close as possible to the root folder (ex: C:\build) so you won't hit the "path too long" issue that plagues Windows.
    • Builds that fail do sometimes succeed after we do a Ctrl+Shift+run arrow (recompiles everything in memory) and a resave.
    • Builds that fail on a "regular" 32-bit OS do sometimes succeed on a 64-bit OS.
    • Sometimes adding more memory to the build machine helps.
    • When none of the above succeeds, we have to try to figure out what changed since the last successful build. Note: we try very hard to do builds very frequently to alleviate these issues.
    • We have several machines (for instance, virtual machines) where we can try several builds in parallel to speed up the debugging.

    So my first suggestion would be to let the build run until you get an error, as it can take a very long time. Also, if you have not done so already, make sure you can try the build on several machines (using VMs [virtual machines] is really ideal for this task).

    I feel your pain, and I wish you the best of luck. Please report back here when you find out what the issue was.

    PJM

  13. Unfortunately, I have seen this a lot as well. I think I first saw it around LabVIEW 8.0 (I never saw it in LV 7.1 or earlier). I don't have a fix, but I have a workaround that sometimes helps get better numbers.

    • Before your application runs, start the profiler.
    • Click on the snapshot button a couple of times (<- this is the trick that sometimes helps).
    • Start your app.

    Hopefully this will be useful in your situation.

    PJM

  14. If you change the cRIO IP address using MAX, does your cRIO executable code stop working? If so, you might want to consider the following (although at this point it might be too much work for your project).

    Instead of using "naked" shared variables, use the Shared Variable API (Functions >> Data Communication >> Shared Variable).

    [screenshot]

    By using this palette function, you are able to use a string "path" reference (NI calls this string the "Shared Variable Reference In") to address the shared variable.

    [screenshot]

    This string path reference looks like this: ni.var.psp://IP Address/Mod#/IOName.

    So for instance:

    • For code running on the cRIO, the IP address will be 127.0.0.1. For code running on the laptop, the IP address will be the cRIO's [ex: 192.168.10.125].
    • Mod# is the module number (ex: Mod2).
    • IOName is the name that you gave that specific I/O on that specific module (ex: External Interlock Status).

    With the above parameters:

    • on the cRIO, the SV path reference is: ni.var.psp://127.0.0.1/Mod2/External Interlock Status
    • on the laptop, it would be: ni.var.psp://192.168.10.125/Mod2/External Interlock Status

    If you do something like this, you can then store the cRIO shared variable configuration string path reference in ini files (possibly one for the cRIO itself and one for the laptop).

    Now when you change your cRIO IP address, you just need to change the IP address in your laptop's ini file and everything will work fine.
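
    To make that concrete, assembling the reference from a configurable address is just string formatting; here is a minimal C sketch (the address, module, and I/O names are the example values from above):

        #include <stdio.h>

        /* Sketch: build a Shared Variable path reference of the form
           ni.var.psp://<IP>/<Module>/<IO name> from configurable pieces.
           The IP would come from the laptop's (or cRIO's) ini file. */
        int main(void)
        {
            const char *ip = "192.168.10.125";            /* from the ini file */
            const char *module = "Mod2";                  /* module number */
            const char *io = "External Interlock Status"; /* I/O name */
            char path[256];

            snprintf(path, sizeof(path), "ni.var.psp://%s/%s/%s", ip, module, io);
            printf("%s\n", path);
            return 0;
        }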

    I am using this method on a shipping instrument that can have various IP addresses, and it works great.

    I hope that this helps.

    PJM

    Note: Now, the crazy/cool thing with this approach is that, depending on what is in your cRIO code, you could potentially run part (or all) of the cRIO code directly on the laptop by changing the cRIO IP address from 127.0.0.1 to its "real" one (ex: 192.168.10.15).
