ned

Members
  • Content Count: 568
  • Joined
  • Last visited
  • Days Won: 14

ned last won the day on November 19 2016

ned had the most liked content!

Community Reputation: 57

About ned

  • Rank: The 500 club
  • Birthday: 04/05/1980

Profile Information

  • Gender: Male
  • Location: Hayward, CA

LabVIEW Information

  • Version: LabVIEW 2015
  • Since: 1999

  1. ned

    Weird cRIO Behaviour with DVRs

    I posted an Idea Exchange suggestion years ago that NI provide a warning about front panel property node use when building an RT executable, but sadly all we got was a VI Analyzer test, which I assume you didn't run. If anyone has any pull with NI to make the build process warn about this potential problem, please try to make it happen. https://forums.ni.com/t5/LabVIEW-Real-Time-Idea-Exchange/Warn-about-front-panel-property-node-use-when-building-RT/idi-p/1702046
  2. Cross-posted here: https://forums.ni.com/t5/LabVIEW/Complex-cluster-array-with-static-size-in-calling-dll/m-p/3741685/
  3. ned

    Using grow/shrink function on single structure

    You can turn Auto-Grow on for that loop. Then select any item in the structure and use the arrow keys on the keyboard to push that item past the edge of the structure to make it grow. Not quite the same thing, but maybe close enough?
  4. You definitely cannot use a .NET library on a cRIO. .NET is Windows-only.
  5. The most common way the FPGA buffer fills is when the host side either hasn't been started or hasn't been read, but at very high data rates a buffer that's too small could cause it too. The DMA engine is fast, but I can't quantify that; transfer rates vary depending on the hardware and possibly also on the direction of transfer (on older boards, transfers from the FPGA to the host were much faster than the other direction; that may no longer be true with newer hardware). Is there jitter? You could measure it: make a simple FPGA VI that fills a DMA buffer continuously at a fixed rate, then on the RT side read, say, 6000 elements at a time in a loop and measure how long each loop iteration takes and how much it varies.

     As for buffer sizing, there's no optimal ratio; it depends on the details of your application and on what else needs memory in your FPGA design.

     I'm having trouble following your description of events and what data is transferred, but if you can provide a clearer explanation or some sample code, I'll try to offer suggestions. Why are you using TDMS files to transfer data? Using a file transfer for raw data is probably not the most efficient approach.

     As for the DMA transfer, inserting magic values in the middle of the stream and then scanning the entire stream for them doesn't sound very efficient; there's probably a better approach. Perhaps you could use a second FIFO that simply tells the RT side how many elements to read from the main data FIFO: write however many points are necessary for one event into the main data FIFO, and when that's complete, write the element count to the second FIFO. On the host, read the second FIFO and then read that number of elements from the first FIFO. That may not match what you're doing at all - as I said, I don't quite understand your data transfers - but maybe it helps you think about alternate approaches.
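
     A rough sketch of that two-FIFO handshake, written as C-style pseudocode since the real implementation would be LabVIEW block diagram code; fifo_read() and the FIFO names are hypothetical stand-ins for the host-side FIFO Read:

         #include <stdint.h>
         #include <stddef.h>

         /* Hypothetical blocking read of 'count' elements from a host-side DMA FIFO. */
         extern void fifo_read(int fifo_id, uint32_t *dest, size_t count);

         enum { COUNT_FIFO, DATA_FIFO };

         /* Read exactly one event: the FPGA writes the event data to DATA_FIFO,
            then writes the element count to COUNT_FIFO once the event is complete. */
         void read_one_event(uint32_t *event_buf, size_t max_elems)
         {
             uint32_t n;
             fifo_read(COUNT_FIFO, &n, 1);         /* how many elements follow       */
             if (n > max_elems)
                 n = (uint32_t)max_elems;          /* guard against oversized events */
             fifo_read(DATA_FIFO, event_buf, n);   /* read one complete event's data */
         }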
  6. First, a terminology note: NI uses "host" to mean the system that hosts the FPGA, which in this case is your RT system. RT and host are synonymous here.

     It sounds like you have a misunderstanding about the DMA FIFO. The FIFO has two buffers: one on the host (RT) side and the other on the FPGA. The host-side buffer can be many times larger than the buffer on the FPGA, and the DMA logic automatically transfers data from the FPGA buffer to the host buffer whenever the FPGA buffer fills, or at regular intervals. If you're finding that you're filling the DMA FIFO, make the host-side buffer larger (you can do this through an FPGA method on the RT side) and read larger chunks. I would take out the RT FIFO entirely here and read directly from the DMA FIFO, although probably in a normal loop rather than a timed one, since it sounds like your timing is event-based.

     I don't fully understand your parsing scheme with the special characters; if you can share some code, it might be possible to provide specific suggestions. Have you considered using multiple DMA FIFOs to separate out the different streams, or do you need them all combined into a single channel?
  7. Have you checked what methods and properties are available for the .NET class that's returned by GetActive? Are you sure that the ToInt64 method returns a memory address? How did you configure the MoveBlock call? Make sure that you configured the src parameter as a pointer-sized integer passed by value. If ToInt64 does what you indicate it should do, then the value on the wire is the memory address and should be passed directly (by value); if you instead pass a pointer to that pointer, you'll get garbage.
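
     For what it's worth, here's roughly the C equivalent of a correctly configured call; the MoveBlock declaration below is approximately what extcode.h provides, and copy_from_dotnet_address is just an illustrative name:

         #include <stdint.h>
         #include <stddef.h>

         /* MoveBlock is exported by the LabVIEW runtime; this is roughly its
            declaration from extcode.h (source first, destination second). */
         void MoveBlock(const void *src, void *dest, size_t numBytes);

         void copy_from_dotnet_address(int64_t addr, void *dest, size_t numBytes)
         {
             /* 'addr' IS the source address, so cast it and pass it by value... */
             MoveBlock((const void *)(intptr_t)addr, dest, numBytes);

             /* ...whereas passing &addr would copy the bytes of the local
                variable holding the address - the garbage case described above. */
         }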
  8. The line at the bottom isn't a function prototype; I'm not quite sure what it's supposed to be doing. Can you include more of the header file?

     Data Format should be set to Array Data Pointer. You should never pass a LabVIEW handle to a DLL unless that DLL is specifically written to handle LabVIEW data types. There's no need to take an empty string, convert it to an array, and then type cast that to a cluster; bundle in the cluster constant directly.

     You could also ignore the structure entirely: compute the size of the cluster, (256 + 4) * 256 = 66560 bytes, initialize an array of that size, and pass that instead, again as an Array Data Pointer. You can then parse out the returned array however you like (accessing specific subsets, or attempting to convert it to the big cluster).
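
     As an illustration of what that size computation assumes (the field names and ordering here are made up; the real layout has to come from the vendor's header), a packed structure of 256 records at 260 bytes each looks like this:

         #include <stdint.h>

         /* Packed so the sizes add up exactly: 256 records of (256 + 4) bytes. */
         #pragma pack(push, 1)
         typedef struct {
             char    text[256];    /* 256-byte string  */
             int32_t value;        /*   4-byte integer */
         } Record;                 /* 260 bytes per record */

         typedef struct {
             Record records[256];  /* 260 * 256 = 66560 bytes total */
         } Table;
         #pragma pack(pop)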
  9. We're trying to acquire several more PXI 4130 SMUs, preferably inexpensively. I'm sure we're not the only company with lots of spare NI hardware around (but, of course, not those modules). Anyone have some of those modules they no longer need and would be able to sell to us? Any recommendations, other than eBay, on where to look for used NI hardware?
  10. ned

    Odd IT and Corporate Policies

    My (least) favorite conflict with IT: we had a site-wide, network-based solution for printing product labels, which was accessed through Internet Explorer. We wanted some in-house software (that I was writing, in LabVIEW) to talk to that system, but we weren't willing to pay the exorbitant vendor fee for the SDK, so I dropped IE into an ActiveX container on the LabVIEW front panel and got everything working happily. Later, IT wanted to upgrade the version of Internet Explorer, and needed to confirm that it wouldn't cause problems for any existing applications. I mentioned that several people were using this tool I wrote in LabVIEW, and was promptly told that there was a strict policy against an application accessing any company database without prior permission from IT. My boss had to schedule a meeting that involved his boss, the local IT person, and someone from corporate IT, to explain that my code only took the same actions that any authorized user could take by clicking buttons within the label-printing site, and that the program did not directly access any corporate database.
  11. ned

    SCC Role Call

    Apparently we're in the minority here - we're using Mercurial (TortoiseHg). It was set up before I started; it wouldn't have been my first choice. We now have most of our code in a single repository that has grown unmanageable, so we're looking at other solutions; perhaps we should consider moving to SVN (based on the responses here) while we're adjusting our repository structure. I like the idea of Mercurial, but delta-based versioning just doesn't work that well for binary files. I've used SVN a bit, but most of my previous experience is with Perforce, and that would be my first choice if I got to pick (and cost weren't an issue).
  12. For whatever it's worth, we haven't even been able to get multiple executables that use network streams, each talking to a separate cRIO, to run in parallel. Only the first one connects, and the rest don't. Coworkers investigated this before I joined the company and finally gave up; I haven't dug into it further. Instead we build a single executable that launches all the separate user interfaces, which is slightly annoying since a rebuild of any one interface requires rebuilding all of them, but it works. We recently replaced network streams with TCP communication, for other reasons, but one side effect is that we no longer have this issue. Since we're seeing this problem even when each user interface talks to a different cRIO, I don't think any amount of changing the endpoint names will fix your problem. Seems like there's some lower-level issue with access to network streams.
  13. ned

    DLL with Bundle input crashes

    MgErr is defined in "extcode.h" found in the "cintools" directory within your LabVIEW installation. When you create a C file from a Call Library Function Node, you'll notice it includes that file - it includes definitions of standard LabVIEW data types to avoid any confusion over things such as the size of an integer.
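
    As a minimal illustration (the function name is just a placeholder, and it assumes cintools is on your include path), a DLL entry point that uses those LabVIEW definitions might look like this:

        #include "extcode.h"   /* MgErr, int32, LStrHandle, and other LabVIEW types */

        extern "C" __declspec(dllexport) MgErr ExampleFunction(int32 *value)
        {
            if (!value)
                return mgArgErr;   /* standard LabVIEW "argument error" code */
            *value = 42;
            return mgNoErr;        /* 0 = no error */
        }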
  14. ned

    DLL with Bundle input crashes

    I duplicated your code, including building a DLL, and now I can duplicate the crash as well. My apologies for leading you partly in the wrong direction. I see that, despite the generated C file, the array is not being passed as a simple pointer as I expected. With some help from this thread, I've confirmed that although Create C File generates different function prototypes depending on how you configure the cluster parameter, LabVIEW in fact still passes the parameter the same way. The following code, based on Create C File with the parameter passed as Handle by Value, works for me (even when the actual call is configured to pass the parameter as an Array Data Pointer):

        #pragma pack(push,1)
        typedef struct {
            int dimSize;
            double elt[1];
        } TD2;
        typedef TD2 **TD2Hdl;

        typedef struct {
            TD2Hdl elt1;
        } TD1;
        #pragma pack(pop)

        extern "C" __declspec(dllexport) void pointertest(TD1 *arg1);

        void pointertest(TD1 *arg1)
        {
            (*arg1->elt1)->elt[0] = 3.1;
            (*arg1->elt1)->elt[1] = 4.2;
        }

    Note that you should not set the dimSize element directly in your C code; if you need to resize the array, use NumericArrayResize as I mentioned before.
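
    In case it helps, here's a sketch of that resize pattern using the TD1/TD2 definitions above; it assumes you also #include "extcode.h" for NumericArrayResize, and grow_to is just an illustrative name:

        /* Resize the array inside the cluster to newSize elements, then record
           the new logical size in dimSize; NumericArrayResize and the fD
           (8-byte float) type code come from extcode.h. */
        MgErr grow_to(TD1 *arg1, int32 newSize)
        {
            MgErr err = NumericArrayResize(fD, 1, (UHandle *)&arg1->elt1, newSize);
            if (err == mgNoErr)
                (*arg1->elt1)->dimSize = newSize;
            return err;
        }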
  15. ned

    DLL with Bundle input crashes

    Can you attach a ZIP archive containing all the pieces of your project - the code for the shared library, the compiled DLL, and the LabVIEW VIs? In the Call Library Function Node configuration, try changing the Error Checking level to Maximum, and see if it gives you any helpful error messages.