Posts posted by ned

  1. 5 hours ago, ASalcedo said:

    So if user writes "true" in these options, can they debug in real time and because of that get my block diagram?

    No. If you do not check "Enable Debugging" in the build specification, and do not explicitly uncheck the "Remove Block Diagram" option for a VI within the build specification, then the block diagram is removed during the build process and cannot be recovered from the executable. Setting the INI file option might allow the user to connect with the debugger, but they will not actually be able to debug anything because the block diagrams aren't there.
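
    For reference, the debugger connection is controlled by INI tokens in the executable's configuration file. A sketch (the [MyApp] section name is a placeholder for your executable's name; check the NI documentation for your LabVIEW version):

        [MyApp]
        DebugServerEnabled=True
        DebugServerWaitOnLaunch=False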

  2. The PostLVUserEvent function exists so that LabVIEW can call a function in a DLL that will trigger a user event. For example, if you are trying to use a DLL that requires a callback, you can create a callback function (in C or similar) that generates a user event, and pass that function as the callback to the DLL. Then, when the callback is triggered, a user event is generated. (There's a minimal C sketch of this at the end of this post.)

    As I understand it, what you're trying to do here is have Python make the initial call to LabVIEW, rather than a LabVIEW application making a call to an external DLL. Is that correct? If so, your Python code and your LabVIEW code are two separate processes (in the operating system sense of the word) and you won't be able to have one of them call a function in the other (i.e., you can't have Python call a function inside the LabVIEW process). You need some intermediate inter-process communication layer, such as TCP as already suggested. If everything is running on the same computer, I'd vote for UDP instead, since it's faster and simpler, and you won't lose packets over a loopback interface.
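
    To illustrate the callback approach from the first paragraph, here's a minimal C sketch (assumptions: extcode.h comes from LabVIEW's cintools directory, the DLL links against labviewv.lib, the user event carries an int32, and RegisterEvent/MyCallback are hypothetical names):

        #include "extcode.h"

        static LVUserEventRef g_eventRef = 0;   /* set from LabVIEW at startup */

        /* Called once from LabVIEW via a Call Library Function Node to hand
           the user event refnum to the DLL. */
        __declspec(dllexport) void RegisterEvent(LVUserEventRef ref)
        {
            g_eventRef = ref;
        }

        /* Passed to the third-party library as its callback; each invocation
           posts the value to LabVIEW as a user event. */
        __declspec(dllexport) void MyCallback(int value)
        {
            int32 data = (int32)value;
            PostLVUserEvent(g_eventRef, &data); /* LabVIEW copies the data */
        }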

  3. 1 hour ago, shoneill said:

    We should get together and pester NI to offer an in-place bit-preserving method to convert between different 32-bit representations!

    Seriously. In my case I was finding that the time required to receive an image and send it to the FPGA was divided into roughly equal amounts of time for the TCP Read, the Type Cast to U32, and the DMA Write, suggesting that each one involved a similar operation (copying all the received data). I don't imagine that TCP Read could ever write directly into the DMA buffer, but it would be nice to cut out the Type Cast copy.

  4. Importantly, this means that Type Cast is not an in-place operation the way it would be in C (the help for Type Cast suggests that it operates like a C cast). If you have, say, a string that you want to cast to an array of U32, all of that data may get copied even though it's unchanged.

    This has actually caused me problems on a single-board RIO. My code received an image over TCP, and TCP Read outputs a string. That data needed to be passed to the FPGA through a DMA FIFO as a U32 or U64 array. The time required for the Type Cast operation caused a substantial delay and I couldn't find any way around it.
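
    For contrast, here's what a C-style cast does (a sketch; it assumes the byte count is a multiple of 4 and that the platform tolerates the alignment, and it ignores strict-aliasing concerns):

        #include <stddef.h>
        #include <stdint.h>

        /* Reinterpret a received byte buffer as U32s without moving any data. */
        size_t bytes_as_u32(char *buf, size_t len, uint32_t **out)
        {
            *out = (uint32_t *)buf;     /* pointer cast only; zero copies */
            return len / sizeof(uint32_t);
        }

    LabVIEW's Type Cast produces the same reinterpretation of the bits, but does it by copying them into a freshly allocated buffer.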

  5. 1 hour ago, hooovahh said:

    I'd find it hard to believe such an old piece of documented code has a bug this big in it, but you are starting to convince me.

    This is not necessarily old documented code. NI periodically "improves" the PID VIs, and I found a bug in one of them a couple years back: http://forums.ni.com/t5/LabVIEW/RT-PID-toolkit-does-not-match-standard-PID-toolkit-for-arrays/td-p/501080

    Thank you for pointing out this problem. I just (last week) ran into what is probably the same issue - my process variable is well above the setpoint but sometimes the PID output still turns on.

  6. I've had to do this on occasion to force a shift register to the correct data type. I suspect that if you take it out, that shift register will become floating-point, since that's usually the default for a numeric function and there's nothing else in the code to force it to a particular numeric type.

  7. Use DSNewPtr, which you can call through a Call Library Function Node with the library name set to "LabVIEW". See the LabVIEW help or posts on the NI forum about this function and the other memory manager functions. You then use MoveBlock (same idea) to copy data between LabVIEW and that pointer.
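
    As a sketch of the sequence - in LabVIEW each of these calls is a separate Call Library Function Node with the library name set to "LabVIEW"; the C below just shows the equivalent order of operations:

        #include "extcode.h"

        void copy_to_pointer(const uInt8 *srcData, size_t numBytes)
        {
            UPtr p = DSNewPtr(numBytes);     /* allocate from LabVIEW's heap */
            MoveBlock(srcData, p, numBytes); /* copy LabVIEW data to the pointer */
            /* ... hand p to the external code that expects a raw pointer ... */
            DSDisposePtr(p);                 /* free it when done, or you leak */
        }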

  8. I took a look at the manual for two of the three - the Newport and the Keithley - and from the (lack of detailed) information there, the best I can say is that there's no way to tell from the documentation whether the gain values have the same meaning for those two. It's possible, but I'd say unlikely, that you can copy the gains from one controller to the other and get the same results, unless it turns out that they all use the same OEM controller component internally, which wouldn't be unheard of. To determine compatibility, at a minimum you'd need to know the units of the controller gains - for example, is Ki relative to minutes, seconds, ticks of the controller's internal clock, or something else entirely? I don't see that information in the manuals.

  9. Sorry I don't have an answer for you, but I do want to clarify that there's no OCR happening. Instead, when LabVIEW prints the front panel, it sends the text portion to the printer driver as text, with coordinates as to where that text should go on the page. This allows the printer to use information from the font to take advantage of the full printer resolution, making the text look cleaner. It also allows a printer driver such as for PDFs to include text as text rather than as part of an image. LabVIEW also provides a "Get Front Panel Image" method which is what I assume is used by the LabVIEW-specific PDF drivers, and in that case it's retrieving the entire front panel as a single image which includes all the text.

    While I understand your desire not to be tied to a single PDF printer driver, the best route is probably to research the available PDF printer drivers and find one that provides the features you want.

  10. Note that you should move the # Tunnels input outside the For loop, since otherwise you could change that value while the loop is running, and LabVIEW has to account for that possibility. There are also debugging issues here - the additional tunnels are places where LabVIEW needs to allocate memory in case you put a probe on them during execution, but maybe that's actually helping here by changing the way LabVIEW reuses memory. Of course, if you disable debugging, the VI executes instantaneously because LabVIEW optimizes out the unnecessary For loops. I am seeing the 2-tunnel case fastest at 29 ms, the 1- and 3-tunnel versions nearly identical and barely slower (30.5 ms), and the 4-tunnel version slowest at 39.5 ms.

    I don't think you're learning much from this sort of benchmark when you have debugging enabled and code that could otherwise be optimized out.

  11. No, the other way around. Let's say you have some data that passes through a dynamic dispatch VI (that is, there is an input and a corresponding output of the same type). The parent method passes the data through unchanged, so LabVIEW expects that it can reuse the input buffer for the output buffer. Now you override that method in a child, and you do modify the data. At compile time, LabVIEW uses the parent class (it can't know about all the possible child overrides) to determine whether it can expect to reuse the buffer. At runtime, when the child class runs, LabVIEW discovers that it needs to create a copy (to accommodate the modified data) and so it needs to allocate an additional buffer.

    Now, if instead you put an in-place element structure in the parent, and marked the terminal as a modifier, at compile time LabVIEW will pre-allocate a buffer for the output, even though it's not necessary in the parent, which will allow the child that modifies the data to execute a tiny bit faster.

    I'm not sure why this would ever be a useful option for a DVR.

  12. 3 hours ago, stefanusandika said:

    In the case of localhost communication, with continuous data exchange from multiple clients to the server, what method would you use?

    My first thought is UDP or VI Server, but there might be a good way to do it with shared variables: if I'm not mistaken (I haven't used shared variables, just read about them), you can have a single shared variable engine storing and sharing data for multiple clients.

  13. The problem was that I made the FIFO size to the FPGA too big; that design would make the software easier, with just one write on the RT.

    That's not necessary. The FIFO size on the host side is independent of the FIFO size on the FPGA. You can configure the host-side buffer to be much larger and then do a single write as you originally intended. The DMA automatically copies data from the host buffer into the FPGA buffer as space becomes available.

  14. Here are some suggestions. Sorry I don't have time to understand your code and rewrite some of it to illustrate these ideas.

    - Wherever possible, read from and write to IO and memory in a single location, to avoid arbitration, even if the code logically can't possibly execute multiple occurrences in parallel. It's fine to read memory in one location and write it somewhere else, but for each memory block I would try to have a single instance of the memory read, even if that means you read and discard the data most of the time. Likewise, move reading the digital inputs outside the case structure.

    - In order to move the digital inputs outside the case structure, you'll need to restructure your state machine. I would create a lot more states, replacing the nested while loops with their own states where you don't proceed to the next state until the trigger occurs.

    - Once you've done that, you can probably make the upper loop into a single-cycle timed loop (unless the analog outputs don't support it, which you could work around by putting them in their own loop and setting their value through a register). That should save you some FPGA space.

     

    I would also check your logic. As far as I can tell, both TriggerPulses and the shift register value to which it's compared in the "Wait for Falling Trigger" case never change, so I don't see how you would ever move on from that case, unless the host updates the TriggerPulses value.

  15. This could be useful for transferring data to an FPGA, not just from it. I wanted to use an external DVR on a project that involved an sbRIO, but unfortunately the board didn't support the Acquire Write Region method. That project involved driving an inkjet printhead. The FPGA tracked encoder counts, enabled individual nozzles in the printhead, and triggered the printhead to fire. The image to be printed came in over a TCP connection. One major problem was there was no way to avoid making multiple copies of the image in memory, even though the actual content didn't change. First, converting the string (from the TCP connection) to integers (DMA FIFO format) incurred a copy, and then a second copy occurred to copy the integer array into the DMA FIFO. Direct access to the FIFO memory area would have eliminated the second copy (and if this idea were implemented, it would have avoided the first one). I benchmarked the performance carefully, and each copy of a multi-megabyte file turned out to be comparatively time-consuming.

  16. There are a lot more issues to consider than the ones you listed - in particular, the size of the host-side FIFO buffer, the speed at which you read elements out of the FIFO on the host, and the number of elements you read on the host at a time. What is your host code doing? Show us your code.

     

    A DMA FIFO has two buffers: one on the FPGA, and one on the host. For a target-to-host (FPGA to RT) FIFO, the FPGA fills its buffer, and in the background the contents of that buffer are automatically moved to the host buffer periodically or when the buffer is full, whichever happens first, assuming there's room available in the host buffer. (For a host-to-target FIFO, this happens in the other direction - elements are copied out of the host buffer to the FPGA when the FPGA buffer has space available.)

     

    If you fill the FIFO buffer faster than you empty it, you'll start losing data. This will probably appear as though the channels are shifted. You can fix this by reading from the FIFO on the host more frequently, or increasing the size of the host buffer and reading more elements at a time.
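
    To put rough numbers on it (all hypothetical): if the FPGA produces four interleaved channels at 100 kS/s each (400,000 elements/s) while the host reads 10,000 elements every 50 ms (200,000 elements/s), the backlog grows by 200,000 elements per second, and a 1,000,000-element host buffer overflows in about five seconds. Doubling the elements per read, or halving the read interval, balances the rates.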

  17. When would you ever want to do that? I always want to control when a VI runs, either statically as a subVI on a block diagram, or through the proper VI Server method if it's a dynamic call. Do you mean that, for example, you would use a simple "Open FP" invoke node and that will run the VI automatically in addition to opening its front panel? Any other use case?

    You'll see this sometimes when the developer doesn't have access to the application builder, but wants the user to be able to double-click the VI on their desktop and have it start running immediately instead of being presented with the opportunity to edit the VI.

     

    If you need to edit a VI with that setting enabled, you can drop it as a subVI through "Select a VI..." and then control-double-click it to get to the block diagram.

  18. There's no one right way of doing this; it depends on your factory. Your approach with a variant seems reasonable. If you're reading the settings from a file, you might consider using a JSON or XML string instead (an INI file section also works). That way you could read the class name from the string, instantiate the appropriate class, and pass the entire string to a VI in that class to parse. This also makes the settings easier to edit by hand if necessary, and provides some ability for the VI to recover from incomplete settings information. It sounds like it's unnecessary in your application, but in some cases your factory classes might implement a VI that prompts the user to enter the appropriate settings for that specific class.
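
    As a rough illustration of the dispatch (a C sketch with hypothetical class names; in LabVIEW the lookup would load the class by name and call a dynamic-dispatch parse VI):

        #include <stdio.h>
        #include <string.h>

        static void parse_dmm(const char *settings)   { printf("DMM: %s\n", settings); }
        static void parse_scope(const char *settings) { printf("Scope: %s\n", settings); }

        /* Table mapping the class name stored in the file to its parser. */
        static const struct {
            const char *name;
            void (*parse)(const char *settings);
        } factory[] = {
            { "DMM",   parse_dmm },
            { "Scope", parse_scope },
        };

        /* Read the class name out of the settings string's header, then hand
           the whole string to the matching class to parse. */
        void instantiate(const char *className, const char *settings)
        {
            for (size_t i = 0; i < sizeof factory / sizeof factory[0]; i++)
                if (strcmp(factory[i].name, className) == 0)
                    factory[i].parse(settings);
        }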

  19. This sounds like a good application for UDP. You won't lose data transferring between two applications on the same machine. I've done exactly this - built a DLL in LabVIEW that acted as a plugin for another application and provided minimal UDP communication, then sent and received data from a separate LabVIEW application. Of course, you can also use UDP directly from C#. Another option might be ActiveX.
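
    A minimal sender sketch (POSIX sockets; on Windows the same calls exist under Winsock once you've called WSAStartup; the port number is arbitrary):

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            int s = socket(AF_INET, SOCK_DGRAM, 0);
            struct sockaddr_in dest = { 0 };
            dest.sin_family = AF_INET;
            dest.sin_port = htons(61557);                    /* arbitrary port */
            inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr); /* loopback only */
            const char *msg = "hello from the plugin";
            sendto(s, msg, strlen(msg), 0,
                   (struct sockaddr *)&dest, sizeof dest);
            close(s);
            return 0;
        }

    On the LabVIEW side, a UDP Open on the same port followed by UDP Read in a loop receives these datagrams.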

  20. Does this example help? It demonstrates calling a VI that's part of one application from another application using VI Server. In this case the remote VI contains a user event reference that's used to generate an event in the remote application, but the exact mechanism isn't important. The attached project is saved in LabVIEW 2014 with build specifications that have configuration files set appropriately to enable VI server; there's a version without the build specifications in LabVIEW 2012 (with snippet images) at http://forums.ni.com/t5/LabVIEW/sync-method-between-two-applications/m-p/3217891#M934603.
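
    For reference, the VI server tokens in each executable's INI file look something like this ([MyApp] is a placeholder for the executable's name; 3363 is the default VI server port, and the "+*" access lists allow any caller, so tighten them for real deployments):

        [MyApp]
        server.tcp.enabled=True
        server.tcp.port=3363
        server.tcp.access="+*"
        server.vi.access="+*"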

    VIServerFGVwithEvent.zip

  21. If I remember correctly, NI deliberately disabled this when you built into an exe after LabVIEW 2009(?).

    Not true. NI did change the internal format of executables, which changes the way you refer to the path of a VI within an executable, but it's still possible to call a VI in another running executable through VI server.

  22. When you try to open the reference to the VI, are you passing in the path to the VI, or a string with only the VI name? If you are passing in the VI path, it may have changed between LabVIEW 8.6 and 2011. Try passing in a string containing only the VI name instead (the connector defaults to a path, but will accept a string).

     

    It would be helpful if you post your code, or at the very least the error that occurs when you try to open the remote VI reference.
