Posts posted by bbean

  1. 1 hour ago, Thoric said:

    Their roles are:

    1. One is Dialogue style GUI Editor for populating an Array - a relatively simple actor.
    2. Another is a Dialogue style GUI Editor for a more complex file management need. Manages remotely (FTP) stored files.
    3. A third Dialogue style GUI Editor for managing another Array - a relatively simple actor, like (1).
    4. A local file management actor for receiving configurations and collating them into a JSON file - a simple decoupled actor with the ability to gather JSON objects from any other actor for storage and retrieval.
    5. A simple actor that adds log entries to a local text file, and uploads the local file to a remote FTP folder.

    So they don't all have one common dependency that the other actors don't also use. The first three present their FPs, but so do many other actors. Some depend on FTP interaction, using a separate internal library we have, but not exclusively so.

    I think I'll try cloning one of these 5 actors, trial build it then strip out 1 feature, and repeat until it builds successfully to determine which component is the problem...

    This is a long shot, but do they all need to run in the UI thread while their preferred execution system is set to "same as caller", so that something gets screwed up when they are called by dynamic dispatch at runtime?  What happens if you set their preferred execution system to "user interface" and retry?

     

  2. Agreed.  As part of my trials and tribulations with MAX I had to repair NI-VISA.  With NIPM 18.5(?) there was no option to do that, so they recommended uninstalling and reinstalling...fair enough.  Tried that, and NIPM wanted to uninstall LabVIEW...WTF.  Upgrading to the latest version of NIPM (19.6) provided a better experience, allowing you to repair installs now.

  3. Has anyone used IPFS as a tool for storing and distributing test data (multiple gigabytes)?  My use case would be to run tests that store data on local Windows machines and then distribute that data to other users (who may have Linux, Windows, etc.) and also to a centralized archiving location.  The users and test machines are in a relatively strict network environment and most users' machines are locked down.  Some of the Linux users may have elevated privileges to install things like IPFS, but I'm worried about a typical Windows user who may want to get the files easily without having to go through a bunch of command-line steps to install IPFS on their machine after requesting elevated privileges.

     

  4. 9 hours ago, hooovahh said:

    I don't have anything to add other than I've seen this but not in any recent projects I've worked on.  It always frustrated me when I'd drill into some VI that was taking forever and it would be some property node or Read/Write/Open that was supposed to have a small timeout.

    Thanks for the quick response.  It's a rather annoying problem.

    7 hours ago, Rolf Kalbermatter said:

    Once VISA calls into the Windows USB driver it can only hope that this driver will return within a reasonable time.

    7 hours ago, Rolf Kalbermatter said:

    Personally I guess some previous command you send hasn't been fully processed by the device, or you haven't read the entire response, or haven't reset a status flag in the device or something like that. These things all shouldn't be able to lock up the USB interface, but they can and sometimes do.

    I think this may have been the problem, but I won't know until I test again on Monday.  You are correct that it's a PM320E, and so far their driver has been a pain.  The suspect command is the error query the parent class implements by default, "SYST:ERR?", but the PM320E requires ":SYST:ERR?".  A nuance (nuisance) that I failed to notice.

    PS.  I think I may have borked the instrument by upgrading the firmware too.  Oh well, that's what you get for trying to get something done before a holiday weekend.
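    The colon nuance above, sketched in Python purely for illustration (these posts are about LabVIEW drivers; the helper name and flag here are hypothetical, not part of any real instrument API): a leading colon anchors a SCPI command at the root of the command tree, and some instruments insist on it.

```python
def scpi_error_query(needs_root_colon: bool = False) -> str:
    """Return the SCPI error-query command string for a device.

    Most instruments accept the relative form "SYST:ERR?", but some
    (like the PM320E discussed above) only respond to the rooted form
    ":SYST:ERR?", with an explicit leading colon.
    """
    base = "SYST:ERR?"
    return ":" + base if needs_root_colon else base
```

    In class-hierarchy terms, a child driver class would override the parent's default error query by passing the rooted form for this device.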

  5. I have an issue where reading the VISA Instr Property "Intf Type" of a USB Instrument hangs for about 40 secs:

    [attached screenshot]

    followed by an asynchronous VISA Write hang for 2+ minutes!  The timeout on the VISA instr session is set to 1000ms.   Here are the other details of the session:

    [attached screenshot]

    and here's a snip of the VI:

    [attached screenshot]

     

    Any idea why these long timeouts are occurring, or why the 1000 ms timeout is being violated for both the Instr property call (no idea what goes on under the hood there) and the VISA Write?
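    In the meantime, one application-level mitigation (sketched in Python rather than LabVIEW, with a hypothetical helper name) is to bound how long the caller waits on a blocking driver call.  Note this only bounds the wait; it cannot abort a call that is truly stuck inside the driver.

```python
import concurrent.futures

def call_with_timeout(fn, timeout_s, *args, **kwargs):
    """Run fn(*args, **kwargs) in a worker thread; stop waiting after timeout_s.

    The worker thread keeps running if fn is stuck inside a driver call,
    so this bounds the caller's wait without cancelling the underlying call.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, *args, **kwargs)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        raise TimeoutError(f"call did not return within {timeout_s} s")
    finally:
        pool.shutdown(wait=False)  # don't block on the (possibly stuck) worker
```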

     

  6. 6 minutes ago, drjdpowell said:

    By default, subVI are set to "Same as Caller" execution system, but they can be a specific system instead.  I suspect it might be just the subVI that does the TestStand call that needs to be in a different Execution System, not the calling Actor.vi itself.  So try just changing the subVI.

    If that doesn't work you may have to separate the TestStandAPI calls out.  Are you using your TestStand Actor as a GUI or user interface?  If so you may have to create another Actor to separate out the TestStand API calls that are causing the log jam into a new Actor....That new actor should not have any property/invoke nodes which would force its VI into the UI thread.

  7. I worked on some EtherCAT issues a few years back and remember that at the time the cRIO didn't support Beckhoff array datatypes, so we had to make individual I/O variables for each item in the array on the Beckhoff side.  Were you able to import the XML file OK into the LabVIEW project?

  8. If you don't need to do FPGA image processing, I would explore the other options for Camera Link cards that are not FPGA based and see if they will work with Pharlap.

    With regards to the FPGA example, this may be a long shot if you haven't compiled FPGA code before, and I'm not sure it will work at all.  I don't have time right now to fully explain, but to summarize:

    • Open the example 1477 getting started project
    • Save a copy of the project and all VIs to a new location (so you don't overwrite the working Windows-target version from NI)
    • Close the off-the-shelf example project
    • Open the copied project
    • Create a new RT target in the project (right-click on the project in the project tree, select New > Targets and Devices, select RT Desktop)
    • Move the FPGA target from the Windows target to the RT Desktop target
    • Move the Host VI from the Windows target to the RT target
    • Compile the FPGA target VI
    • Open the Host VI (now in the real-time target) and reconfigure the Open FPGA Reference to point to the newly compiled FPGA VI.
  9.  

    What do you think of this solution?

    I guess I would need to know more about your requirements, but I think that would be a road less traveled.  Do you need Base, Medium, Full, or Extended Full configuration?  Do you need Power over Camera Link?  Etc.  Why do you need real-time?

    In the future I would recommend talking with Robert Eastland and purchasing all your vision-related hardware/software from Graftek.  He has been extremely helpful to me in the past and knows his stuff.  I have no affiliation with the company.

    Did you try my suggestion to compile the example FPGA code and move the host example to the real-time target to see if it's even a possibility?

     

  10.  

    According to the specification:

    http://download.ni.com/support/softlib//vision/Vision Acquisition Software/18.5/readme_VAS.html

    NI-IMAQ I/O is driver software for controlling reconfigurable input/output (RIO) on image acquisition devices and real-time targets. The following hardware is supported by NI-IMAQ I/O:

    .......

    • NI PCIe-1473R
    • NI PCIe-1473R-LX110
    • NI PCIe-1477

     

    the frame grabbers should work under LabVIEW Real-Time.  Do you see it that way?

    The statement from NI (Munich) is now (after the purchase) that the frame grabber PCIe-1477 should not work under LabVIEW Real-Time.

    I also had the impression that NI was not really interested in solving the problem.  For them, LabVIEW Real-Time is an obsolete product.

    The question to NI (Munich) whether the frame grabber PCIe-1473R works under LabVIEW Real-Time has not been answered to this day.

    Too bad that nobody else has experience with the frame grabbers under LabVIEW Real-Time.

    A nice week start

    Jim

    Unfortunately, the card probably does not work directly in LabVIEW Realtime.  NI's specifications and documentation are often vague with hidden gotchas.  I had a similar problem with an NI-serial card years ago when Real-time and FPGA first debuted.   I wanted to use the serial card directly in LabVIEW real-time with VISA, but I ended up having to code a serial FPGA program on the card because VISA did not recognize it as a serial port early on.

    Is there any way you can try to compile the FPGA example and download it to the card?

    C:\Program Files (x86)\National Instruments\LabVIEW 2018\examples\Vision-RIO\PCIe-1477\PCIe-1477 Getting Started\PCIe-1477 Getting Started.lvproj

    After you compile and download the FPGA code to the 1477, I think you would have to move "PCIe-1477 Getting Started\Getting Started (Host).vi" from the Windows target to the real-time target, open it up, and see if it can be run.

     

  11.  

    The FPGA tool is also available. The frame grabber runs on the same PC under Windows 7 (other hard disk).
    Does anyone have an idea why the frame grabber is not recognized in realtime?

    a nice weekend

    Jim

    At least for the 1473R, according to this:

    https://forums.ni.com/t5/Instrument-Control-GPIB-Serial/My-Basler-acA2040-180km-NIR-is-not-visible-in-NI-MAX/m-p/2402066/highlight/true#M59080

    "The NI PCIe-1473R Frame Grabber contains a reconfigurable FPGA in the image path enabling on-board image processing. This means that the full communication between the camera and the frame grabber goes through the FPGA. It is then a major difference comparing to the other standard frame grabber without FPGA. 

    "It means also that the camera will not shows up in Measurement & Automation Explorer."

    I'm guessing here, but I think you have to create a new FlexRIO FPGA project with the option for the card:

    https://forums.ni.com/t5/Machine-Vision/PCIe-1473R-fpga-project/td-p/2123826

    Maybe look and see if you can compile an example from here:

    https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000kIBdSAM&l=en-US

    ..\Program Files (x86)\National Instruments\LabVIEW 2018\examples\Vision-RIO\

     

  12. TCP is not free of pain either, though.  I've been on networks where the IT network-traffic monitors will automatically close TCP connections if no data flows across them, EVEN if TCP keep-alive packets flow across the connections.  For whatever reason, the packet-inspection policies don't treat keep-alive packets as legitimate traffic.  We ended up having to send NO-OP packets with some dummy data in them every 5 minutes or so if no "real" data was flowing.
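    For anyone wanting to try the same workaround, here is a rough Python sketch of what such a NO-OP scheme amounts to (the frame layout and type codes are made up for illustration; our implementation was in LabVIEW): give every message a small header, let the receiver silently discard the NO-OP type, and have an idle timer on the sender emit one every few minutes.

```python
import socket
import struct

# Hypothetical message-type codes for the application protocol
NOOP = 0   # heartbeat frame, discarded by the receiver
DATA = 1   # real application data

def send_frame(sock: socket.socket, msg_type: int, payload: bytes = b"") -> None:
    """Send one frame as [u8 type][u32 big-endian length][payload]."""
    sock.sendall(struct.pack("!BI", msg_type, len(payload)) + payload)

def recv_frame(sock: socket.socket):
    """Read exactly one frame; returns (msg_type, payload)."""
    msg_type, length = struct.unpack("!BI", _recv_exact(sock, 5))
    return msg_type, _recv_exact(sock, length)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Loop on recv() until n bytes arrive (TCP may deliver partial reads)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return buf
```

    The receive loop would simply `continue` on a NOOP frame, so the dummy traffic keeps the connection alive in the eyes of the packet-inspection gear without disturbing the application.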

  13.  

    So not sure how to do RUDP.

    You would have to create/send the packet header(s) as defined by RUDP in each data packet in LabVIEW on the Pharlap side by placing it before the data you send.  Then you would have to send a response packet with the RUDP header(s) on the LabVIEW host side based on whether you received a packet out of sequence (or with an invalid checksum, etc.).  You would effectively be creating your own slimmed-down version of TCP at the LabVIEW application layer.  Quite a pain unless absolutely necessary.
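    As a rough illustration (in Python; the field layout below is a simplified invention capturing the reliability core, not the exact RUDP header), the per-packet framing and the ACK logic might look like:

```python
import struct

# Hypothetical slimmed-down header: flags (u8), sequence number (u32),
# acknowledgement number (u32), payload length (u16), all big-endian.
HEADER = struct.Struct("!BIIH")
FLAG_DATA, FLAG_ACK = 0x01, 0x02

def make_packet(flags: int, seq: int, ack: int, payload: bytes = b"") -> bytes:
    """Prepend the reliability header to the payload before sending over UDP."""
    return HEADER.pack(flags, seq, ack, len(payload)) + payload

def parse_packet(raw: bytes):
    """Split a received datagram back into (flags, seq, ack, payload)."""
    flags, seq, ack, length = HEADER.unpack_from(raw)
    return flags, seq, ack, raw[HEADER.size:HEADER.size + length]

def ack_for(received_seq: int) -> bytes:
    """Receiver replies with an ACK naming the next expected sequence number."""
    return make_packet(FLAG_ACK, 0, received_seq + 1)
```

    The sender would then retransmit any packet whose sequence number is not acknowledged within a timeout, which is exactly the part of TCP you end up rebuilding by hand.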

  14. You could try installing pyvisa-py (a partial replacement for the NI-VISA backend) on the Raspberry Pi and see if it can implement remote sessions, e.g. visa://hostname/ASRL1::INSTR.  It doesn't look too promising based on this discussion:

    https://github.com/pyvisa/pyvisa-py/issues/165

    but it seems to indicate that if you know the address and don't rely on the pyvisa-py resource manager, it may work.
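    A minimal sketch of the client side (Python; the helper function is mine, and whether the visa:// session actually works depends on the backend, per the linked issue):

```python
def remote_resource(host: str, resource: str) -> str:
    """Build a VISA remote-session address like visa://hostname/ASRL1::INSTR."""
    return f"visa://{host}/{resource}"

# Hypothetical usage with pyvisa, skipping resource-manager discovery and
# opening the known address directly (untested; requires pyvisa on the
# client and a working VISA server on the Pi):
#   import pyvisa
#   rm = pyvisa.ResourceManager()
#   inst = rm.open_resource(remote_resource("raspberrypi.local", "ASRL1::INSTR"))
#   print(inst.query("*IDN?"))
```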


  15. 6 hours ago, Benoit said:

    I think the biggest mistake from NI was to not add 20 years experienced user into their development team....... but no real user.

    Benoit

    This.

    I tested NXG for the first time at a feedback session during the CLA Summit, so I was learning NXG on the spot in front of one of the NXG developers.  When I would get stuck trying to figure something out, the developer would ask how I would do it in legacy LabVIEW, I would tell him, and then he would show me how to do it in NXG.  My understanding was that the NXG IDE was designed to make the number of programming steps more "efficient".  Unfortunately this sometimes sacrifices the many years of muscle memory from doing things in legacy LabVIEW.  A bad analogy would be brushing your teeth with the opposite hand because studies have shown that ambidextrous tooth brushers clean teeth slightly better.  It may be slightly better in theory, but the pain of learning outweighs the benefits.  Some of the things I remember being (annoyingly) slightly different:

    • Quickdrop functionality
    • Adding a terminal on the block diagram seemed more tedious and defaulted to not showing the control/indicator on the front panel.  WTF.

    While I'm sure the NXG team has received guidance/direction/feedback from very experienced insiders at NI, I walked away feeling like there was no way the experienced internal NI LabVIEW users were developing only with NXG on a daily basis by default.  Otherwise muscle-memory things like Quick Drop would work exactly like they did in legacy LabVIEW.  I think what needs to happen is Darren needs to un-retire from the fastest-LabVIEW-programmer competition and compete next May at NIWeek using NXG.

    That said, the NXG developers and team leads were very receptive to my feedback and seemed genuinely open to making changes.  Whether that carries through to the end product, we will see.  I also saw some new IDE features (new right-click options, for instance) that made me think, "that makes sense, and I can see it helping speed up development once I get used to it."

    If and when I use NXG, I would like to see a checkbox in the options that says "maintain legacy front panel, block diagram, and keyboard shortcut behavior as much as possible".

     

  16. On 12/8/2018 at 3:21 PM, Michael Aivaliotis said:

    From NIMax you can format a cRIO.

    Is there any way to do this without MAX?  Or a description of what happens when MAX executes the format?

    Unfortunately no Windows boxes are allowed in the previously mentioned "secure" area, so the wipe needs to be done without MAX.  Once the cRIO is wiped it can leave the secure area and all the normal NI stuff (MAX, RAD, Windows) can be used.  As someone told me, it's the security policy; it doesn't have to make sense.
