Everything posted by ned

  1. This is exactly right. A branch is just a reference to a commit, and the commit it points to advances as new commits are added to the branch. You can reset a branch to point at any arbitrary commit, even one in what you might think of as a different branch, because again, it's just a reference. Deleting a branch removes the reference with no impact on the underlying commits. In case you want to make things more complicated, git also supports an alternative to merging branches, where code is "rebased" rather than "merged" so the eventual history looks like a single continuous line rather than a tree.
  2. Thanks for the notes! String to byte array wasn't an option because I needed to use a 32-bit wide FIFO to get sufficiently fast transfers (my testing indicated that DMA transfers were roughly constant in elements/second regardless of the element size, so using a byte array would have cut throughput by 75%). I posted about this at the time https://forums.ni.com/t5/LabVIEW/optimize-transfer-from-TCP-Read-to-DMA-Write-on-sbRIO/td-p/2622479 but 7 years (and 3 job transfers) later I'm no longer in a position to experiment with it. I like the idea of implementing type cast without a copy as a learning experience; I think the C version would be straightforward and pure LabVIEW (with calls to memory manager functions) would be an interesting challenge.
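     As a very rough sketch (the function name is made up; this assumes the LStrHandle type and the LStrBuf/LStrLen macros from extcode.h in the cintools directory), the C side of an in-place "view" might look something like this, with no data copied at all:

     #include <stdint.h>
     #include "extcode.h"   /* LabVIEW cintools header: MgErr, LStrHandle, LStrBuf, LStrLen */

     /* View the bytes of a LabVIEW string as 32-bit elements without copying.
        Any trailing bytes beyond a multiple of 4 are ignored. The view is only
        valid as long as LabVIEW doesn't resize or free the string. */
     MgErr StringAsU32View(LStrHandle str, uint32_t **view, int32_t *numElements)
     {
         if (str == NULL || *str == NULL || view == NULL || numElements == NULL)
             return mgArgErr;
         *view = (uint32_t *)LStrBuf(*str);   /* pointer to the string's data */
         *numElements = LStrLen(*str) / 4;
         return noErr;
     }

     The hard part, of course, is the pure-LabVIEW version, since you'd have to convince LabVIEW to treat the handle as a different type without making a copy.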
  3. I really wanted to use that function a few years back, but it wasn't available on the cRIO I was using. In case it's helpful, here's the situation in which I'd hoped to use it: We were using a cRIO to drive an inkjet print head. The host system downloaded a bitmap to the cRIO, which the cRIO then sent to the FPGA over a DMA FIFO. I used a huge host-side buffer, large enough to store the entire bitmap; the FPGA read data from that FIFO as needed. I benchmarked this and it required 3 copies of the entire bitmap, which could be several megabytes: one copy when initially downloaded; one copy for the conversion from string (from TCP Read) to numeric array (for the FIFO Write); and one copy in the FIFO buffer. These memory copies were one of the limiting factors in the speed of the overall system (the other was how fast we could load data to the print head). If I had been able to use "Acquire Write Region" I could have saved one copy, because the typecast from string to numeric array could have written directly to the FIFO buffer. If there were some way to do the string to numeric array conversion in-place maybe I could have avoided that copy too.
  4. I posted an Idea Exchange suggestion years ago that NI provide a warning about front panel property node use when building an RT executable, but sadly all we got was a VI analyzer test, which I assume you didn't run. If anyone has any pull with NI to make the build process warn about this potential problem, please try to make it happen. https://forums.ni.com/t5/LabVIEW-Real-Time-Idea-Exchange/Warn-about-front-panel-property-node-use-when-building-RT/idi-p/1702046
  5. Cross-posted here: https://forums.ni.com/t5/LabVIEW/Complex-cluster-array-with-static-size-in-calling-dll/m-p/3741685/
  6. You can turn Auto-Grow on for that loop. Then select any item in the structure and use the arrow keys on the keyboard to push that item past the edge of the structure to make it grow. Not quite the same thing, but maybe close enough?
  7. You definitely cannot use a .NET library on a cRIO; LabVIEW's .NET support is Windows-only, and cRIOs don't run Windows.
  8. The most common way that the FPGA buffer fills is when the host side either hasn't been started or hasn't been read, but at very high data rates a buffer that's too small could do it too. The DMA engine is fast, but I can't quantify that. Transfer rates vary depending on the hardware and possibly also on the direction of transfer (on older boards, transfers from the FPGA to the host were much faster than the other direction; that may no longer be true with newer hardware).

     Is there jitter? You could measure it: make a simple FPGA VI that fills a DMA buffer continuously at a fixed rate. On the RT side, read, say, 6000 elements at a time in a loop, and measure how long each loop iteration takes and how much variation there is. As for buffer sizing, there's no optimal ratio; it depends on the details of your application and what else needs memory in your FPGA design.

     I'm having trouble following your description of events and what data is transferred, but if you can provide a clearer explanation or some sample code I'll try to provide suggestions. Why are you using TDMS files to transfer data? Using a file transfer for raw data is probably not the most efficient approach.

     As for the DMA transfer, inserting magic values in the middle of the stream and then scanning the entire stream for them doesn't sound very efficient; there's probably a better approach. Perhaps you could use a second FIFO that simply tells the RT side how many elements to read from the main data FIFO: write however many points are necessary for one event into the main data FIFO, and when that's complete, write the element count to the second FIFO. Then, on the host, read the second FIFO first and then read that number of elements from the first FIFO. That may not match what you're doing at all - as I said, I don't quite understand your data transfers - but maybe it helps you think about alternate approaches.
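     Here's a rough sketch of that handshake as C-style pseudocode, just to illustrate the flow (the read_count_fifo/read_data_fifo helpers are made-up names standing in for the two FPGA Interface FIFO reads):

     #include <stdint.h>
     #include <stddef.h>

     /* Hypothetical helpers: each blocks until the requested elements are available. */
     extern uint32_t read_count_fifo(void);                    /* one element count per event */
     extern void     read_data_fifo(uint32_t *dest, size_t n); /* n elements of event data   */

     void host_read_loop(void)
     {
         static uint32_t event[65536];            /* sized for the largest possible event */
         for (;;) {
             uint32_t n = read_count_fifo();      /* FPGA writes this after the event data */
             if (n > sizeof event / sizeof event[0])
                 n = sizeof event / sizeof event[0];   /* guard against a bogus count */
             read_data_fifo(event, n);            /* read exactly one event's worth */
             /* ...process one complete event... */
         }
     }

     No scanning for magic values, and every read pulls out exactly one event.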
  9. First, one terminology note: NI uses "host" to mean the system that hosts the FPGA, in this case your RT system. RT and host are synonymous here.

     It sounds like you have a misunderstanding about the DMA FIFO. The FIFO has two buffers: one on the host (RT) and the other on the FPGA. The host-side buffer can be many times larger than the buffer on the FPGA. The DMA logic automatically transfers data from the FPGA buffer to the host buffer whenever the FPGA buffer fills, or at regular intervals. If you're finding that you're filling the DMA FIFO, make the host-side buffer larger (you can do this through an FPGA method on the RT side) and read larger chunks. I would take out the RT FIFO entirely here and read directly from the DMA FIFO, although probably in a normal loop rather than a timed one, since it sounds like your timing is event-based.

     I don't fully understand your parsing scheme with the special characters; if you can share some code, it might be possible to provide specific suggestions. Have you considered using multiple DMA FIFOs to separate out the different streams, or do you need them all combined into a single channel?
  10. Have you checked what methods and properties are available for the .NET class that's returned by GetActive? Are you sure that the ToInt64 method returns a memory address? How did you configure the MoveBlock call? Make sure that you configured the src parameter as a pointer-sized integer passed by value. If ToInt64 does what you indicate it should do, then the value on the wire is the memory address and should be passed directly (by value); if you instead pass a pointer to that pointer, you'll get garbage.
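     For reference, the MoveBlock prototype in extcode.h is (roughly):

     /* Copies numBytes bytes from src to dest. "src" is itself the address to
        copy from, so the ToInt64 value should be wired straight in, configured
        as a pointer-sized integer passed by value. */
     void MoveBlock(const void *src, void *dest, size_t numBytes);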
  11. The line at the bottom isn't a function prototype; I'm not quite sure what it's supposed to be doing. Can you include more of the header file? Data Format should be set to Array Data Pointer. You should never pass a LabVIEW handle to a DLL unless that DLL is specifically written to handle LabVIEW data types. There's no need to take an empty string, convert it to an array, then type cast that to a cluster; bundle in the cluster constant directly. You could also ignore the structure entirely, compute the size of the cluster ((256 + 4) * 256) = 66560 bytes, initialize an array of that size, and pass that instead. Make sure to pass it as an array data pointer. You can then parse out the returned array however you like (accessing specific subsets, or attempting to convert it to the big cluster).
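     For what it's worth, here's a guess at the packed layout implied by that (256 + 4) * 256 arithmetic - the field names are made up, but the sizes should match:

     #include <stdint.h>

     #pragma pack(push,1)
     typedef struct {
         int32_t value;          /* 4 bytes   */
         uint8_t data[256];      /* 256 bytes */
     } Element;                  /* 260 bytes per element */

     typedef struct {
         Element elements[256];  /* 260 * 256 = 66560 bytes total */
     } BigCluster;
     #pragma pack(pop)

     An array of 66560 U8s passed as an Array Data Pointer occupies exactly the same memory, which is why the flat-array approach works.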
  12. We're trying to acquire several more PXI 4130 SMUs, preferably inexpensively. I'm sure we're not the only company with lots of spare NI hardware around (but, of course, not those modules). Anyone have some of those modules they no longer need and would be able to sell to us? Any recommendations, other than eBay, on where to look for used NI hardware?
  13. My (least) favorite conflict with IT: we had a site-wide, network-based solution for printing product labels, which was accessed through Internet Explorer. We wanted some in-house software (that I was writing, in LabVIEW) to talk to that system, but we weren't willing to pay the exorbitant vendor fee for the SDK, so I dropped IE into an ActiveX container on the LabVIEW front panel and got everything working happily. Later, IT wanted to upgrade the version of Internet Explorer, and needed to confirm that it wouldn't cause problems for any existing applications. I mentioned that several people were using this tool I wrote in LabVIEW, and was promptly told that there was a strict policy against an application accessing any company database without prior permission from IT. My boss had to schedule a meeting that involved his boss, the local IT person, and someone from corporate IT, to explain that my code only took the same actions that any authorized user could take by clicking buttons within the label-printing site, and that the program did not directly access any corporate database.
  14. Apparently we're in the minority here - we're using Mercurial (TortoiseHg). It was set up before I started, and it wouldn't have been my first choice. We now have most of our code in a single repository, which has grown unmanageable, so we're looking at other solutions; perhaps we should consider moving to SVN (based on the responses here) while we're restructuring the repository. I like the idea of Mercurial, but delta-based versioning just doesn't work that well for binary files. I've used SVN a bit, but most of my previous experience is with Perforce, and that would be my first choice if I got to pick (and cost wasn't an issue).
  15. For whatever it's worth, we haven't even been able to get multiple executables that use network streams, each talking to a separate cRIO, to run in parallel. Only the first one connects, and the rest don't. Coworkers investigated this before I joined the company and finally gave up; I haven't dug into it further. Instead we build a single executable that launches all the separate user interfaces, which is slightly annoying since a rebuild of any one interface requires rebuilding all of them, but it works. We recently replaced network streams with TCP communication, for other reasons, but one side effect is that we no longer have this issue. Since we're seeing this problem even when each user interface talks to a different cRIO, I don't think any amount of changing the endpoint names will fix your problem. Seems like there's some lower-level issue with access to network streams.
  16. MgErr is defined in "extcode.h", found in the "cintools" directory within your LabVIEW installation. When you create a C file from a Call Library Function Node, you'll notice that the generated file includes that header; it defines the standard LabVIEW data types to avoid any confusion over things such as the size of an integer.
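     Illustrative only (the exact declarations vary a bit between LabVIEW versions), but the idea is a set of fixed-size typedefs along these lines:

     typedef signed char    int8;
     typedef short          int16;
     typedef int            int32;
     typedef unsigned int   uInt32;
     typedef int32          MgErr;   /* error code returned by LabVIEW manager functions */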
  17. I duplicated your code, including building a DLL, and now I can duplicate the crash as well. My apologies for leading you partly in the wrong direction. I see that, despite the generated C file, the array is not getting passed as a simple pointer as I expected. With some help from this thread, I've confirmed that although Create C File generates different function prototypes depending on how you pass the cluster parameter, LabVIEW in fact still passes the parameter the same way. The following code, based on Create C File with the parameter passed as Handles by Value, works for me (even when the Call Library Function Node is actually configured to pass the parameter as an Array Data Pointer):

     #pragma pack(push,1)
     typedef struct {
         int dimSize;
         double elt[1];
     } TD2;
     typedef TD2 **TD2Hdl;

     typedef struct {
         TD2Hdl elt1;
     } TD1;
     #pragma pack(pop)

     extern "C" __declspec(dllexport) void pointertest(TD1 *arg1);

     void pointertest(TD1 *arg1)
     {
         (*arg1->elt1)->elt[0] = 3.1;
         (*arg1->elt1)->elt[1] = 4.2;
     }

     Note that you should not set the dimSize element directly in your C code; if you need to resize the array, use NumericArrayResize as I mentioned before.
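     And in case it helps, here's a rough sketch (function name made up, not from the generated file) of what a resize inside the DLL might look like with those same TD1/TD2 types - fD is the extcode.h type code for an array of doubles:

     #include "extcode.h"

     MgErr growArray(TD1 *arg1, int32 newSize)
     {
         /* Resize the handle through the LabVIEW memory manager... */
         MgErr err = NumericArrayResize(fD, 1, (UHandle *)&arg1->elt1, newSize);
         if (err == noErr)
             (*arg1->elt1)->dimSize = newSize;   /* ...then record the new element count */
         return err;
     }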
  18. Can you attach a ZIP archive containing all the pieces of your project - the code for the shared library, the compiled DLL, and the LabVIEW VIs? In the Call Library Function Node configuration, try changing the Error Checking level to Maximum, and see if it gives you any helpful error messages.
  19. Have you tried right-clicking on the Call Library Function Node and choosing "Create C File"? That will give you a starting point with the correct function prototype. It appears that since your cluster contains only one element, the cluster is ignored, and you're passing only the array. You can see this by creating a C file from the existing configuration, then wiring the array directly without the bundle and creating a C file again - the resulting C prototype is the same.

     Since you have configured the Call Library Function Node to pass the array parameter as an array data pointer, the value passed to the function is a pointer to the first element of the array, NOT a pointer to the LVCluster structure you've defined. If your function will not resize the array, then this works fine, and you can change your function to the following (although you might want to add a parameter for the array size):

     void pointertest(double *source)
     {
         source[0] = 3.1;
         source[1] = 4.2;
     }

     However, if you plan to resize the array within your DLL, then you need a more complex solution. One possibility is to change the Call Library Function Node configuration to pass the parameter as "Handles by Value" so that you can use the LabVIEW memory manager functions such as NumericArrayResize. Why are you bundling the array into a cluster at all here?
  20. Again, we're not talking about a non-strictly-typed language here. All the type checking can be done at compile time. Have you done any programming in F# or another ML derivative? I've mostly seen this in those functional languages. For example, generics allow you to write a sorting algorithm that takes a list, and a comparison function that operates on elements of that list. The compiler verifies that the comparison function matches the list element type, so you can't have a run-time type mismatch. Right now you can't do this in LabVIEW. You can either have your list contain variants, in which case you could get a run-time type mismatch, or you could wrap your data in a class that implements a comparison function, but since LabVIEW doesn't allow implementation of interfaces, this only works if your data class inherits from some Comparable class, which won't work if you actually want to inherit from some other class that might not allow comparison. I would love to see this in LabVIEW.
  21. Your description doesn't match what your code appears to be doing. Have you tried probing the wires going into the graph so you can see what values are actually being plotted? What about replacing the analog input value with a front panel control so you can test and simulate the behavior easily?
  22. You can handle a Key Down? filter event for that input box, and check if the key was Tab. A filter event allows you to change parameters of the event before it's handled, so you can change the Tab to Enter, then allow the event to process normally. Or, slightly more complicated, you can discard the event if Tab is pressed, and change the focus to the next item (if you want to be extra clever, also handle the shift-tab case for going to the previous item).
  23. Why would this require 2 TCP ports? TCP is full-duplex over a single port.
  24. Again, can you show your client code? Which TCP Read mode are you using? Is there any chance you're doing a zero-length TCP Read on many loop iterations? If so, I think it will return immediately with no error (and no data), which could lead to something like the situation you describe.
  25. You've misunderstood something about TCP communication: you will almost never see a timeout on a TCP Write, so you can't use it for throttling. It sounds like you're hoping that the TCP Write will only succeed if the receiving side is currently waiting on a TCP Read, but that's not how it works. The operating system receives and buffers incoming data. If you call TCP Read and there's enough data in the buffer, you'll get that data immediately. If there isn't enough data in the buffer, then TCP Read waits for either enough data to arrive or for the timeout period to elapse.

     If you want to send data only as fast as the receiving side can handle it, then you need the receiving side to send a request for the data instead of the sender pushing it at regular intervals.

     On the client, it makes no sense to try to run the loop faster than the TCP Read timeout. Don't use a timed loop or any other timing mechanism in the loop; just let the TCP Read handle the loop timing. Can you show your client code? Are you sure you're reading all the bytes on each pass through the client loop? If you're reading fewer bytes than you write, you could see something like the situation you describe.
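     To illustrate the "read all the bytes" point in C terms (LabVIEW's TCP Read in standard mode does the equivalent for you, waiting for the full byte count or the timeout), the receiving side has to keep reading until the whole message has arrived, because a single read may return fewer bytes than requested:

     #include <sys/types.h>
     #include <sys/socket.h>

     /* Read exactly len bytes from sock into buf; returns len on success,
        0 if the connection closed, or -1 on error. */
     ssize_t read_exact(int sock, void *buf, size_t len)
     {
         size_t got = 0;
         while (got < len) {
             ssize_t n = recv(sock, (char *)buf + got, len - got, 0);
             if (n <= 0)
                 return n;
             got += (size_t)n;
         }
         return (ssize_t)got;
     }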