ned

Members
  • Posts

    571
  • Joined

  • Last visited

  • Days Won

    14

ned last won the day on November 19 2016

ned had the most liked content!

Profile Information

  • Gender
    Male
  • Location
    San Francisco, CA

LabVIEW Information

  • Version
    LabVIEW 2015
  • Since
    1999

ned's Achievements

  1. This is exactly right. A branch points at a commit, and the commit to which it points changes as additional commits are added to the branch. You can reset a branch to point at any arbitrary commit, even one in what you might think of as a different branch, because again, it's just a reference to a commit. Deleting a branch removes the reference with no impact on the sequence of commits. In case you want to make things more complicated, git supports an alternative to merging branches, where commits are "rebased" rather than merged, so the eventual history looks like a single continuous line rather than a tree.
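
     At the command line, the same ideas look like this (the branch names and commit hash are made up for illustration):

         git branch experiment          # create a new reference to the current commit
         git checkout experiment
         git commit -am "more work"     # 'experiment' now points at the new commit
         git reset --hard 1a2b3c4       # repoint the branch at an arbitrary commit
         git branch -D experiment       # deletes only the reference; the commits remain
         git rebase master              # replay commits so history is one continuous line
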
  2. Thanks for the notes! String to byte array wasn't an option because I needed to use a 32-bit wide FIFO to get sufficiently fast transfers (my testing indicated that DMA transfers were roughly constant in elements/second regardless of the element size, so using a byte array would have cut throughput by 75%). I posted about this at the time https://forums.ni.com/t5/LabVIEW/optimize-transfer-from-TCP-Read-to-DMA-Write-on-sbRIO/td-p/2622479 but 7 years (and 3 job transfers) later I'm no longer in a position to experiment with it. I like the idea of implementing type cast without a copy as a learning experience; I think the C version would be straightforward and pure LabVIEW (with calls to memory manager functions) would be an interesting challenge.
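
     If anyone does want to experiment with this, here's a minimal C sketch of the copy-free reinterpretation idea (the function is mine, not a LabVIEW API, and a faithful Type Cast replacement would also need big-endian byte swapping):

         #include <stdint.h>
         #include <stdio.h>
         #include <inttypes.h>

         /* View a byte buffer as a u32 array without copying: the pointer is
          * simply reinterpreted, which is what a copy-free Type Cast would do.
          * Real code must also respect alignment and strict-aliasing rules. */
         static const uint32_t *view_as_u32(const uint8_t *bytes, size_t byteCount,
                                            size_t *elementCount)
         {
             if (byteCount % sizeof(uint32_t) != 0)
                 return NULL;                     /* length must be a multiple of 4 */
             *elementCount = byteCount / sizeof(uint32_t);
             return (const uint32_t *)bytes;      /* no allocation, no memcpy */
         }

         int main(void)
         {
             uint8_t raw[8] = {1, 2, 3, 4, 5, 6, 7, 8};   /* e.g. data from TCP Read */
             size_t n = 0;
             const uint32_t *words = view_as_u32(raw, sizeof raw, &n);
             for (size_t i = 0; words && i < n; i++)
                 printf("%08" PRIx32 "\n", words[i]);
             return 0;
         }
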
  3. I really wanted to use that function a few years back, but it wasn't available on the cRIO I was using. In case it's helpful, here's the situation in which I'd hoped to use it: we were using a cRIO to drive an inkjet print head. The host system downloaded a bitmap to the cRIO, which the cRIO then sent to the FPGA over a DMA FIFO. I used a huge host-side buffer, large enough to store the entire bitmap; the FPGA read data from that FIFO as needed.

     I benchmarked this, and it required 3 copies of the entire bitmap, which could be several megabytes: one copy when initially downloaded; one copy for the conversion from string (from TCP Read) to numeric array (for the FIFO Write); and one copy in the FIFO buffer. These memory copies were one of the limiting factors in the speed of the overall system (the other was how fast we could load data to the print head). If I had been able to use "Acquire Write Region", I could have saved one copy, because the type cast from string to numeric array could have written directly to the FIFO buffer. If there were some way to do the string to numeric array conversion in place, maybe I could have avoided that copy too.
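
     For reference, here's roughly what the saved copy would look like from the NI FPGA Interface C API (a sketch under my assumptions: session, fifo, and sock are handles obtained elsewhere, error handling is minimal, and I never actually got to try this):

         #include <stdint.h>
         #include <sys/types.h>
         #include <sys/socket.h>
         #include "NiFpga.h"        /* NI FPGA Interface C API */

         NiFpga_Status stream_chunk(NiFpga_Session session, uint32_t fifo,
                                    int sock, size_t elementsWanted)
         {
             uint32_t *region = NULL;
             size_t acquired = 0, remaining = 0;

             /* Borrow a writable window of the host-side FIFO buffer. */
             NiFpga_Status status = NiFpga_AcquireFifoWriteElementsU32(
                 session, fifo, &region, elementsWanted,
                 5000 /* ms timeout */, &acquired, &remaining);
             if (NiFpga_IsError(status))
                 return status;

             /* Read from the socket directly into the FIFO buffer, so the
              * network data never passes through an intermediate array. */
             size_t bytesNeeded = acquired * sizeof(uint32_t);
             size_t got = 0;
             while (got < bytesNeeded) {
                 ssize_t n = recv(sock, (char *)region + got, bytesNeeded - got, 0);
                 if (n <= 0)
                     break;         /* real code: handle errors and retries */
                 got += (size_t)n;
             }

             /* Hand the filled region back to the DMA engine. */
             return NiFpga_ReleaseFifoElements(session, fifo, acquired);
         }
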
  4. I posted an Idea Exchange suggestion years ago that NI provide a warning about front panel property node use when building an RT executable, but sadly all we got was a VI Analyzer test, which I assume you didn't run. If anyone has any pull with NI to make the build process warn about this potential problem, please try to make it happen. https://forums.ni.com/t5/LabVIEW-Real-Time-Idea-Exchange/Warn-about-front-panel-property-node-use-when-building-RT/idi-p/1702046
  5. Cross-posted here: https://forums.ni.com/t5/LabVIEW/Complex-cluster-array-with-static-size-in-calling-dll/m-p/3741685/
  6. You can turn Auto-Grow on for that loop. Then select any item in the structure and use the arrow keys on the keyboard to push that item past the edge of the structure to make it grow. Not quite the same thing, but maybe close enough?
  7. You definitely cannot use a .NET library on a cRIO. .NET is Windows-only.
  8. The most common way the FPGA buffer fills is when the host side either hasn't been started or hasn't been read, but at very high data rates a buffer that's too small could do it too. The DMA engine is fast, but I can't quantify that. Transfer rates vary depending on the hardware and possibly also on the direction of transfer (on older boards, transfers from the FPGA to the host were much faster than the other direction; that may no longer be true with newer hardware).

     Is there jitter? You could measure it: make a simple FPGA VI that fills a DMA buffer continuously at a fixed rate. On the RT side, read, say, 6000 elements at a time in a loop, and measure how long each loop iteration takes and how much variation there is. As for buffer sizing, there's no optimal ratio; it depends on the details of your application and what else needs memory in your FPGA design.

     I'm having trouble following your description of events and what data is transferred, but if you can provide a clearer explanation or some sample code, I'll try to provide suggestions. Why are you using TDMS files to transfer data? Using a file transfer for raw data is probably not the most efficient approach. As for the DMA transfer, inserting magic values in the middle of the stream and then scanning the entire stream for them doesn't sound very efficient; there's probably a better approach there. Perhaps it's possible to use a second FIFO that simply tells the RT side how many elements to read from the main data FIFO. That is, you write however many points are necessary for one event into the main data FIFO, and when that's complete, you write the element count to the second FIFO. Then, on the host, you read the second FIFO and then read that number of elements from the first FIFO, as in the sketch below. That may not match what you're doing at all - as I said, I don't quite understand your data transfers - but maybe it helps you think about alternate approaches.
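
     A sketch of the host-side read under that two-FIFO scheme, assuming the NI FPGA Interface C API (the FIFO handles and names are placeholders; the same pattern works with the LabVIEW FIFO Read method):

         #include <stdint.h>
         #include <stdlib.h>
         #include "NiFpga.h"        /* NI FPGA Interface C API */

         /* countFifo carries one u32 per event (that event's element count);
          * dataFifo carries the samples themselves. */
         NiFpga_Status read_one_event(NiFpga_Session session,
                                      uint32_t countFifo, uint32_t dataFifo,
                                      uint32_t **outData, uint32_t *outCount)
         {
             uint32_t count = 0;
             size_t remaining = 0;

             /* Block until the FPGA announces a complete event. */
             NiFpga_Status status = NiFpga_ReadFifoU32(
                 session, countFifo, &count, 1, NiFpga_InfiniteTimeout, &remaining);
             if (NiFpga_IsError(status))
                 return status;

             /* Read exactly that many elements: no magic sentinel values
              * and no scanning of the stream. */
             uint32_t *data = malloc(count * sizeof *data);
             /* real code: check data for NULL and validate count */
             status = NiFpga_ReadFifoU32(
                 session, dataFifo, data, count, NiFpga_InfiniteTimeout, &remaining);
             if (NiFpga_IsError(status)) {
                 free(data);
                 return status;
             }
             *outData = data;      /* caller frees */
             *outCount = count;
             return status;
         }
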
  9. First, a terminology note: NI uses "host" to mean the system that hosts the FPGA, in this case your RT system, so RT and host are synonymous here.

     It sounds like you have a misunderstanding about the DMA FIFO. The FIFO has two buffers: one on the host (RT) and the other on the FPGA. The host-side buffer can be many times larger than the buffer on the FPGA, and the DMA logic automatically transfers data from the FPGA buffer to the host buffer whenever the FPGA buffer fills, or at regular intervals. If you're finding that you're filling the DMA FIFO, make the host-side buffer larger (you can do this through an FPGA Interface method on the RT side) and read larger chunks. I would take the RT FIFO out entirely here and read directly from the DMA FIFO, although probably in a normal loop rather than a timed one, since it sounds like your timing is event-based.

     I don't fully understand your parsing scheme with the special characters; if you can share some code, it might be possible to provide specific suggestions. Have you considered using multiple DMA FIFOs to separate out the different streams, or do you need them all combined into a single channel?
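
     For what it's worth, here's how that host-side resize looks in the NI FPGA Interface C API (a sketch; the one-million-element depth is an arbitrary example, and the LabVIEW equivalent is the FIFO Configure invoke method on the FPGA Interface):

         #include "NiFpga.h"        /* NI FPGA Interface C API */

         NiFpga_Status setup_fifo(NiFpga_Session session, uint32_t fifo)
         {
             /* The host-side depth can be far larger than the FPGA-side
              * buffer; here we ask for roughly one million elements. */
             NiFpga_Status status = NiFpga_ConfigureFifo(session, fifo, 1u << 20);
             if (NiFpga_IsError(status))
                 return status;
             return NiFpga_StartFifo(session, fifo);
         }
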
  10. Have you checked what methods and properties are available for the .NET class that's returned by GetActive? Are you sure that the ToInt64 method returns a memory address? How did you configure the MoveBlock call? Make sure that you configured the src parameter as a pointer-sized integer passed by value. If ToInt64 does what you indicate it should do, then the value on the wire is the memory address and should be passed directly (by value); if you instead pass a pointer to that pointer, you'll get garbage.
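
     To make the by-value point concrete: LabVIEW's extcode.h declares MoveBlock as (simplified here)

         void MoveBlock(const void *src, void *dst, size_t numBytes);

     so in the Call Library Function Node, src is a single pointer-sized value, and the 64-bit address from ToInt64 already is that value. A C analogue of the right and wrong configurations (the function and variable names are mine):

         #include <stdint.h>
         #include <string.h>

         void copy_from_address(int64_t addressFromToInt64, void *dst, size_t n)
         {
             /* Correct: treat the integer as the source pointer itself. */
             const void *src = (const void *)(intptr_t)addressFromToInt64;
             memcpy(dst, src, n);   /* this is all MoveBlock does */

             /* Wrong ("pass by pointer"): &addressFromToInt64 is the address
              * of the local variable, so this copies the pointer's own bytes
              * and whatever follows them -- garbage. */
             /* memcpy(dst, &addressFromToInt64, n); */
         }
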
  11. The line at the bottom isn't a function prototype; I'm not quite sure what it's supposed to be doing. Can you include more of the header file? Data Format should be set to Array Data Pointer. You should never pass a LabVIEW handle to a DLL unless that DLL is specifically written to handle LabVIEW data types. There's no need to take an empty string, convert it to an array, and then type cast that to a cluster; bundle in the cluster constant directly. You could also ignore the structure entirely, compute the size of the cluster ((256 + 4) * 256 = 66560 bytes), initialize a byte array of that size, and pass that instead. Make sure to pass it as an array data pointer. You can then parse the returned array however you like (accessing specific subsets, or attempting to convert it to the big cluster).
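
     If it helps, here's my guess at the C layout implied by that arithmetic: 256 records, each holding a 256-byte string plus a 4-byte value. The field names are invented, and the pack(1) assumes the DLL uses byte packing the way flattened LabVIEW clusters do:

         #include <stdint.h>
         #include <assert.h>

         #pragma pack(push, 1)
         typedef struct {
             char    text[256];   /* the 256-byte string portion */
             int32_t value;       /* the 4-byte numeric portion */
         } Record;                /* 260 bytes each */

         typedef struct {
             Record records[256]; /* 260 * 256 = 66560 bytes total */
         } Table;
         #pragma pack(pop)

         static_assert(sizeof(Table) == 66560, "layout must match the DLL's");
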
  12. We're trying to acquire several more PXI 4130 SMUs, preferably inexpensively. I'm sure we're not the only company with lots of spare NI hardware around (but, of course, not those modules). Anyone have some of those modules they no longer need and would be able to sell to us? Any recommendations, other than eBay, on where to look for used NI hardware?
  13. My (least) favorite conflict with IT: we had a site-wide, network-based solution for printing product labels, which was accessed through Internet Explorer. We wanted some in-house software (that I was writing, in LabVIEW) to talk to that system, but we weren't willing to pay the exorbitant vendor fee for the SDK, so I dropped IE into an ActiveX container on the LabVIEW front panel and got everything working happily. Later, IT wanted to upgrade the version of Internet Explorer, and needed to confirm that it wouldn't cause problems for any existing applications. I mentioned that several people were using this tool I wrote in LabVIEW, and was promptly told that there was a strict policy against an application accessing any company database without prior permission from IT. My boss had to schedule a meeting that involved his boss, the local IT person, and someone from corporate IT, to explain that my code only took the same actions that any authorized user could take by clicking buttons within the label-printing site, and that the program did not directly access any corporate database.
  14. Apparently we're in the minority here: we're using Mercurial (TortoiseHg). It was set up before I started, and it wouldn't have been my first choice. We now have most of our code in a single repository that has grown unmanageable, so we're looking at other solutions; perhaps we should consider moving to SVN (based on the responses here) while we're adjusting our repository structure. I like the idea of Mercurial, but delta-based versioning just doesn't work that well for binary files. I've used SVN a bit, but most of my previous experience is with Perforce, and that would be my first choice if I got to pick (and cost wasn't an issue).
  15. For whatever it's worth, we haven't even been able to get multiple executables that use network streams, each talking to a separate cRIO, to run in parallel. Only the first one connects, and the rest don't. Coworkers investigated this before I joined the company and finally gave up; I haven't dug into it further. Instead we build a single executable that launches all the separate user interfaces, which is slightly annoying since a rebuild of any one interface requires rebuilding all of them, but it works. We recently replaced network streams with TCP communication, for other reasons, but one side effect is that we no longer have this issue. Since we're seeing this problem even when each user interface talks to a different cRIO, I don't think any amount of changing the endpoint names will fix your problem. Seems like there's some lower-level issue with access to network streams.