Everything posted by ned

  1. Thanks for the notes! String to byte array wasn't an option because I needed to use a 32-bit wide FIFO to get sufficiently fast transfers (my testing indicated that DMA transfers were roughly constant in elements/second regardless of the element size, so using a byte array would have cut throughput by 75%). I posted about this at the time https://forums.ni.com/t5/LabVIEW/optimize-transfer-from-TCP-Read-to-DMA-Write-on-sbRIO/td-p/2622479 but 7 years (and 3 job transfers) later I'm no longer in a position to experiment with it. I like the idea of implementing type cast without a copy as a learni
  2. I really wanted to use that function a few years back, but it wasn't available on the cRIO I was using. In case it's helpful, here's the situation in which I'd hoped to use it: We were using a cRIO to drive an inkjet print head. The host system downloaded a bitmap to the cRIO, which the cRIO then sent to the FPGA over a DMA FIFO. I used a huge host-side buffer, large enough to store the entire bitmap; the FPGA read data from that FIFO as needed. I benchmarked this and it required 3 copies of the entire bitmap, which could be several megabytes: one copy when initially downloaded; one copy for
  3. I posted an Idea Exchange suggestion years ago that NI provide a warning about front panel property node use when building an RT executable, but sadly all we got was a VI analyzer test, which I assume you didn't run. If anyone has any pull with NI to make the build process warn about this potential problem, please try to make it happen. https://forums.ni.com/t5/LabVIEW-Real-Time-Idea-Exchange/Warn-about-front-panel-property-node-use-when-building-RT/idi-p/1702046
  4. Cross-posted here: https://forums.ni.com/t5/LabVIEW/Complex-cluster-array-with-static-size-in-calling-dll/m-p/3741685/
  5. You can turn Auto-Grow on for that loop. Then select any item in the structure and use the arrow keys on the keyboard to push that item past the edge of the structure to make it grow. Not quite the same thing, but maybe close enough?
  6. You definitely cannot use a .NET library on a cRIO. .NET is Windows-only.
  7. The most common way the FPGA buffer fills is when the host side either hasn't been started or hasn't been read, but at very high data rates a buffer that's too small could do it too. The DMA engine is fast, but I can't quantify that. Transfer rates vary depending on the hardware, and possibly also on the direction of transfer (on older boards, transfers from the FPGA to the host were much faster than the other direction; that may no longer be true with newer hardware). Is there jitter? You could measure it... make a simple FPGA VI that fills a DMA buffer continuously at a fixed rate. On the
  8. First, one terminology note: NI uses "host" to mean the system that hosts the FPGA, in this case your RT system. RT and Host are synonymous here. It sounds like you have a misunderstanding about the DMA FIFO. The FIFO has two buffers: one on the host (RT) and the other on the FPGA. The host-side buffer can be many times larger than the buffer on the FPGA. The DMA logic automatically transfers data from the FPGA buffer to the host buffer whenever the FPGA buffer fills, or at regular intervals. If you're finding that you're filling the DMA FIFO, make the host-side buffer larger (you can
  9. Have you checked what methods and properties are available for the .NET class that's returned by GetActive? Are you sure that the ToInt64 method returns a memory address? How did you configure the MoveBlock call? Make sure that you configured the src parameter as a pointer-sized integer passed by value. If ToInt64 does what you indicate it should do, then the value on the wire is the memory address and should be passed directly (by value); if you instead pass a pointer to that pointer, you'll get garbage.
  10. The line at the bottom isn't a function prototype; I'm not quite sure what it's supposed to be doing. Can you include more of the header file? Data Format should be set to Array Data Pointer. You should never pass a LabVIEW handle to a DLL unless that DLL is specifically written to handle LabVIEW data types. There's no need to take an empty string, convert it to an array, then type cast that to a cluster; bundle in the cluster constant directly. You could also ignore the structure entirely, compute the size of the cluster ((256 + 4) * 256) = 66560 bytes, initialize an array of that s
  11. We're trying to acquire several more PXI 4130 SMUs, preferably inexpensively. I'm sure we're not the only company with lots of spare NI hardware around (but, of course, not those modules). Anyone have some of those modules they no longer need and would be able to sell to us? Any recommendations, other than eBay, on where to look for used NI hardware?
  12. My (least) favorite conflict with IT: we had a site-wide, network-based solution for printing product labels, which was accessed through Internet Explorer. We wanted some in-house software (that I was writing, in LabVIEW) to talk to that system, but we weren't willing to pay the exorbitant vendor fee for the SDK, so I dropped IE into an ActiveX container on the LabVIEW front panel and got everything working happily. Later, IT wanted to upgrade the version of Internet Explorer, and needed to confirm that it wouldn't cause problems for any existing applications. I mentioned that several peo
  13. Apparently we're in the minority here - we're using Mercurial (TortoiseHg). It was set up before I started, and it wouldn't have been my first choice. We now have most of our code in a single repository, which has grown unmanageable, so we're looking at other solutions; perhaps we should consider moving to SVN (based on responses here) while we're adjusting our repository structure. I like the idea of Mercurial, but delta-based versioning just doesn't work that well for binary files. I've used SVN a bit, but most of my previous experience is with Perforce, and that would be my first choice if I got to
  14. For whatever it's worth, we haven't even been able to get multiple executables that use network streams, each talking to a separate cRIO, to run in parallel. Only the first one connects, and the rest don't. Coworkers investigated this before I joined the company and finally gave up; I haven't dug into it further. Instead we build a single executable that launches all the separate user interfaces, which is slightly annoying since a rebuild of any one interface requires rebuilding all of them, but it works. We recently replaced network streams with TCP communication, for other reasons, but one s
  15. MgErr is defined in "extcode.h" found in the "cintools" directory within your LabVIEW installation. When you create a C file from a Call Library Function Node, you'll notice it includes that file - it includes definitions of standard LabVIEW data types to avoid any confusion over things such as the size of an integer.
  16. I duplicated your code, including building a DLL, and now I can duplicate the crash as well. My apologies for leading you partly in the wrong direction. I see that despite the generated C file, the array is not getting passed as a simple pointer, unlike what I expected. With some help from this thread, I've confirmed that although Create C File generates different function prototypes depending on how you pass the cluster parameter, LabVIEW in fact still passes the parameter the same way. The following code, based on Create C File with the parameter passed as Handle by Value, works for me (even
  17. Can you attach a ZIP archive containing all the pieces of your project - the code for the shared library, the compiled DLL, and the LabVIEW VIs? In the Call Library Function Node configuration, try changing the Error Checking level to Maximum, and see if it gives you any helpful error messages.
  18. Have you tried right-clicking on the Call Library Function Node and choosing "Create C File"? That will give you a starting point with the correct function prototype. It appears that since your cluster contains only one element, the cluster is ignored, and you're passing only the array. You can see this by creating a C file from the existing configuration, then wiring the array directly without the bundle and creating a C file - the resulting C prototype is the same. Since you have configured the Call Library Function Node to pass the array parameter as an array data pointer, the value passed
  19. Again, we're not talking about a non-strictly-typed language here. All the type checking can be done at compile time. Have you done any programming in F# or another ML derivative? I've mostly seen this in those functional languages. For example, generics allow you to write a sorting algorithm that takes a list, and a comparison function that operates on elements of that list. The compiler verifies that the comparison function matches the list element type, so you can't have a run-time type mismatch. Right now you can't do this in LabVIEW. You can either have your list contain variants, in whic
  20. Your description doesn't make much sense relative to what your code is doing. Have you tried probing the wires going to the graph so you can see what you're doing? What about replacing the analog input value with a front panel control so you can test and simulate the behavior easily?
  21. You can handle a Key Down? filter event for that input box, and check if the key was Tab. A filter event allows you to change parameters of the event before it's handled, so you can change the Tab to Enter, then allow the event to process normally. Or, slightly more complicated, you can discard the event if Tab is pressed, and change the focus to the next item (if you want to be extra clever, also handle the shift-tab case for going to the previous item).
  22. Why would this require 2 TCP ports? TCP is full-duplex over a single port.
  23. Again, can you show your client code? Which TCP Read mode are you using? Is there any chance you're doing a zero-length TCP Read on many loop iterations? If so, I think it will return immediately with no error (and no data), which could lead to something like the situation you describe.
  24. You've misunderstood something about TCP communication: you will almost never see a timeout on a TCP write. You can't use it for throttling. It sounds like you're hoping that the TCP Write will only succeed if the receiving side is currently waiting on a TCP Read, but that's not how it works. The operating system receives and buffers incoming messages. If you call TCP Read and there's enough data in the buffer, you'll get that data immediately. If there isn't enough data in the buffer, then TCP Read waits for either enough data to arrive or for the timeout period to elapse. If you want to send
  25. No. If you do not check "Enable Debugging" in the build specification, and do not explicitly uncheck the "Remove Block Diagram" option for a VI within the build specification, then the block diagram is removed during the build process and cannot be recovered from the executable. Setting the INI file option might allow the user to connect with the debugger, but they will not actually be able to debug anything because the block diagrams aren't there.