Everything posted by ned

  1. I'm not sure why you'd do that, but it makes sense. Think of the difference between the integer word length and the word length as the power of 2 that a single bit represents. If your word length and your integer word length are both the same, that difference is 0, and a single bit represents 2^0=1 (a fixed-point value with an integer length equal to word length has no fractional portion). If the integer word length is 6 and the word length is 8, a single bit is 2^(6-8) = 2^(-2) = 0.25. In your case, you have an integer word length of 2 and a word length of 1, so a single bit represents 2^(2-1) = 2^1 = 2, which matches what LabVIEW is telling you: the number has 1 bit, which represents either 0 or 2.
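     As a quick sanity check of the arithmetic in C (this just illustrates the formula, not how LabVIEW stores fixed-point data internally):
```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* The weight of a single bit is 2^(integer word length - word length). */
    int iwl = 2, wl = 1;                  /* the configuration in the question */
    printf("%g\n", ldexp(1.0, iwl - wl)); /* ldexp(1, n) = 2^n, prints 2       */

    iwl = 6; wl = 8;
    printf("%g\n", ldexp(1.0, iwl - wl)); /* prints 0.25                       */
    return 0;
}
```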
  2. I'd love to meet up with other LAVAers who are mentoring FRC teams. I'm planning to attend the Davis (CA) regional this Thursday (3/15) and possibly Saturday with team 3045. I might make it to the Saturday of the San Jose regional as well. Anyone else planning to attend one of these? How many of you mentor a FRC team?
  3. It might be too extreme for your case, but I believe the cRIO has a "Halt system if TCP/IP Fails" option that you can set in MAX.
  4. You could do something like I did here: http://forums.ni.com/t5/LabVIEW/array-of-cluster/m-p/1822451#M625032 which gives you full control over key navigation. I used a table instead of a multi-column listbox, but I think the same approach would work.
  5. There are a number of features that don't work properly over a remote front panel. For simple display that's not a major issue, but it becomes a challenge if you want a fully-featured, responsive user interface. Some examples: RT targets don't support events on front panel controls, so all controls need to be polled; property nodes don't update when no front panel is connected; and there's no way to implement a pop-up message box (for errors, confirmation before taking an action, etc.). If remote front panels work for you, use them. If your application later grows, you may find that you need more control over the interface and more separation from the logic running on the RT system.
  6. A bit of a side note, but if you have a DLL that is not thread-safe, you can still configure it to be called in any thread so long as you ensure yourself that it cannot be called multiple times in parallel. The easiest way to do that is to wrap it in a VI that isn't reentrant.
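     The non-reentrant wrapper VI is effectively a lock around the call. If you were doing the same thing in plain C instead of LabVIEW, it would look roughly like this (unsafe_dll_call is a made-up stand-in for the library function):
```c
#include <pthread.h>
#include <stdio.h>

/* Stand-in for a non-thread-safe DLL function (uses unprotected static state). */
static int call_count;
static int unsafe_dll_call(int arg) { return ++call_count + arg; }

static pthread_mutex_t dll_lock = PTHREAD_MUTEX_INITIALIZER;

/* Wrapper that serializes every call, the same way a non-reentrant
   wrapper VI serializes access to the Call Library Function node. */
static int safe_dll_call(int arg)
{
    pthread_mutex_lock(&dll_lock);
    int result = unsafe_dll_call(arg);
    pthread_mutex_unlock(&dll_lock);
    return result;
}

int main(void) { printf("%d\n", safe_dll_call(10)); return 0; }
```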
  7. The description makes sense; I haven't opened the attachment. It's common in C to check for a null pointer, and this seems analogous.
  8. The Windows Message Queue library lets you catch and handle Windows messages, but I don't know which message you'll need to watch for to determine whether your application should shut down. I have used this library to receive Windows messages from another application.
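     If it's any help, the messages I would start with are WM_QUERYENDSESSION, WM_ENDSESSION, and WM_CLOSE. In raw Win32 C (not the NI library - just a sketch to show which messages are involved, and I haven't verified which of them the library forwards) the handling looks roughly like this:
```c
#include <windows.h>
#include <stdio.h>

/* Window procedure that watches for shutdown-related messages. */
static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_QUERYENDSESSION:
        printf("Windows is asking whether it may end the session\n");
        return TRUE;                /* TRUE = this app is OK with shutting down */
    case WM_ENDSESSION:
    case WM_CLOSE:
        printf("Time to shut down cleanly\n");
        PostQuitMessage(0);         /* ends the message loop below */
        return 0;
    default:
        return DefWindowProcA(hwnd, msg, wParam, lParam);
    }
}

int main(void)
{
    /* A hidden top-level window whose only job is to receive messages. */
    WNDCLASSA wc = {0};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = GetModuleHandleA(NULL);
    wc.lpszClassName = "ShutdownWatcher";
    RegisterClassA(&wc);
    CreateWindowExA(0, wc.lpszClassName, "", 0, 0, 0, 0, 0,
                    NULL, NULL, wc.hInstance, NULL);

    MSG m;
    while (GetMessageA(&m, NULL, 0, 0) > 0)
    {
        TranslateMessage(&m);
        DispatchMessageA(&m);
    }
    return 0;
}
```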
  9. One thing that's important to remember here is that memory allocation, regardless of size, is almost always fast. COPYING data is slow, and it takes longer the more data there is. Sorry I can't point you at documentation confirming my answers to your questions; they're all based on the accumulated reading of a lot of posts plus some knowledge of programming and compilers. First, responding to your initial comments:
     1) Basically true, although the comment about the pointer to the first element isn't completely correct (nor relevant). LabVIEW keeps all the array data in one contiguous memory block. In a separate block of memory, it maintains a pointer to the array data, the array length, and a stride; there might be other data, too. This makes certain operations, such as Reverse Array, Array Subset, and Decimate, very efficient. For Reverse Array, instead of copying the array data, LabVIEW simply allocates a new array-information block with the same length, a stride of -1, and a pointer to the end of the array (which is now the beginning of the reversed array). This is known as a sub-array, and it will sometimes show up in the context help when you hover over an array wire. Of course, if some function wants to modify the reversed array while the original array is still needed, then LabVIEW must make a copy.
     2) Correct - each execution of Build Array could potentially force LabVIEW to go back to the operating system to request a larger block of memory, then copy the existing array into the new, larger location. LabVIEW may request a larger block than necessary and resize at the end of the loop, to minimize the number of allocations and copies.
     3) Incorrect. There's no guarantee that there will be room to add to the end of the array; other data may already be located there, in which case it's necessary to copy the entire array to a new, larger location regardless of whether you're adding data at the beginning or the end.
     As for your questions about deleting from arrays, I don't know the details, and they may depend on the LabVIEW version as the compiler optimizations improve. Your point 1 is most likely right. You should be able to delete from either end of an array efficiently, but there may also be differences between Delete From Array and Array Subset that make one more efficient than the other. When you say shifting, do you mean Rotate Array? Not that it really matters - I have no idea how that function is implemented, and you probably shouldn't be worrying about it at that level.
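     I don't know LabVIEW's actual internal layout, but the sub-array idea can be sketched in C along these lines (field names invented; the point is that only a small descriptor changes while the data block stays put):
```c
#include <stdio.h>

/* Rough sketch of the sub-array idea: a small descriptor that points into
   a shared, contiguous data block. Field names are made up; LabVIEW's real
   internal layout is not documented here. */
typedef struct {
    double *first;   /* pointer to the first logical element          */
    int     length;  /* number of logical elements                    */
    int     stride;  /* step between logical elements (+1, -1, 2 ...) */
} SubArray;

static double element(SubArray a, int i) { return a.first[i * a.stride]; }

int main(void)
{
    double data[5] = {1, 2, 3, 4, 5};

    SubArray forward  = { data,     5,  1 };  /* the original array          */
    SubArray reversed = { data + 4, 5, -1 };  /* "reverse array": no copying */
    SubArray decimate = { data,     3,  2 };  /* every other element         */

    printf("%g %g %g\n", element(forward, 0), element(reversed, 0), element(decimate, 1));
    /* prints: 1 5 3 */
    return 0;
}
```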
  10. While the event structure is supported, events on front panel controls on RT are not: "Event structures on RT targets do not support events associated with user interface objects, such as VI panels or controls. For example, associating the Value Change event with a control does not work. RT targets support only user events." http://zone.ni.com/r...unsupportedets/ (LV2011 help) So upgrading isn't going to solve the poster's problem. Edit: I should have read more carefully before replying - I missed that Paul was referring specifically to User Events, and it appears the poster may have misunderstood that as well.
  11. Thank you for that information, although it means in retrospect I've written some pretty inefficient code (for example I've often used Type Cast in place of Array to Cluster when I have a large cluster that I modify frequently and I don't want to keep updating the number of items). The use of C-style hints over the terminals makes it look like it's equivalent to a cast in C, where there is, of course, no memory penalty.
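     What I had in mind by "a cast in C": a C-style cast or type pun just reinterprets the bytes that are already there, with no copy, whereas (per the information above) LabVIEW's Type Cast does carry a memory penalty. A minimal illustration:
```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* A C-style "cast" between representations: the same 32 bits in memory
       are simply viewed as a different type; nothing is moved or converted
       element by element. */
    union { float f; uint32_t bits; } u;
    u.f = 1.0f;
    printf("0x%08X\n", u.bits);   /* prints 0x3F800000 */
    return 0;
}
```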
  12. Looks right, but one note: you don't need to allocate a second 260-element array, you can just fork the "ShortPath" wire.
  13. Somewhat on this topic... do you ever add some kind of scheduled/timed tasks to your command handling loop? I have a similar structure with a timeout wired to the dequeue, and I recently added code to the timeout (no command) case that runs through an array of "timed tasks" and any that have reached their "next execution time" are added to the command queue. This seems to be a flexible way to add tasks that need to be executed at regular but not precise intervals. Adding a scheduled task is as simple as enqueueing a "Start Scheduled Task" command with a few parameters including a unique name, and stopping it just requires a similar "Stop Scheduled Task" command with that name as a parameter to remove it from the list. Does anyone else use a similar structure? Have you implemented something similar in a different way?
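     For anyone curious, the logic in the timeout case is nothing fancy. In C-like terms (names invented, and the real version enqueues a command instead of printing) it boils down to this:
```c
#include <stdio.h>
#include <time.h>

/* Illustration only: a table of named tasks, each with a period and a next
   execution time. On every queue timeout, anything that is due gets its
   command "enqueued" (here just printed) and is then rescheduled. */
typedef struct {
    const char *name;
    double      period_s;   /* desired interval, regular but not precise */
    time_t      next_run;
} TimedTask;

static void check_timed_tasks(TimedTask *tasks, int count)
{
    time_t now = time(NULL);
    for (int i = 0; i < count; i++) {
        if (now >= tasks[i].next_run) {
            printf("enqueue command: %s\n", tasks[i].name); /* add to command queue */
            tasks[i].next_run = now + (time_t)tasks[i].period_s;
        }
    }
}

int main(void)
{
    TimedTask tasks[2] = {
        { "Log Temperature", 5.0, 0 },
        { "Flush To Disk",  60.0, 0 },
    };
    /* In the real loop this runs in the "no command" (timeout) case of the
       dequeue; here we just call it once. */
    check_timed_tasks(tasks, 2);
    return 0;
}
```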
  14. I don't have a lot of experience with this, but the native LabVIEW web services might be very close to what you want - I think it will even return XML. It's the same web server that serves remote front panels, but that's not the only thing it can do. From the help, "By default, a Web method VI returns data to HTTP clients as an XML string. The XML string includes a summary of the data in the output terminals wired to the connector pane of the Web method VI."
  15. From your presentation: The way you've demonstrated the use of SEQs is not the way I typically think they're used in a By-Ref implementation, although I could be mistaken about how others use them. I would use a SEQ in a way that mirrors a DVR: never use Preview Queue and always dequeue before enqueue. This way the queue provides its own locking mechanism, no need for a separate lock. This is also more memory-efficient - you're never making a copy of the data in the queue, which can happen when you use preview queue. This use of a SEQ helps answer your question about queues versus notifiers - you could not achieve this behavior with notifiers (unless you wanted to use cancel notification instead of dequeue element - it might work, I haven't tried it).
  16. This has been asked before; there's no way to take a parent class and turn it into a child containing the same parent data as the input. See for example here: http://lavag.org/topic/13383-how-to-properly-cast-object-to-child-class/
  17. Without seeing your code, yes, my best guess is that with a large enough buffer you can get this to work reliably at high transfer rates. I haven't seen any information about the details of the DMA engine and I don't know of any way to control it; you'll just have to trust or test that it works. You could try posting on the NI forum and see if someone with knowledge of the internals can give you an exact answer.
  18. When you write to the DMA FIFO on the FPGA, you're writing to the FPGA's memory. When you read from the DMA FIFO in Windows, you're reading from the host machine's RAM. Behind the scenes the DMA engine waits until the FPGA memory fills to a certain point, then quietly moves all that data into host memory, with no involvement from either the host CPU or the FPGA. Because the host can have a larger buffer than the FPGA, many of these transfers can occur between calls to read the FIFO on the host. You use the Configure method of an FPGA Invoke Node to set the FIFO depth on the host. If you're trying to read from the FIFO too fast and the read is timing out, there's no particular downside to either waiting longer or ignoring the timeout and restarting the read.
  19. For streaming data, I'm with neil. You want to do this using a DMA FIFO because you can handle large numbers of elements at a time very quickly (much faster than with handshaking). Unless your data rate is horribly variable, between the Timeout and Number of Elements inputs you should be able to read consistent blocks on the Windows side. Note, also, that there are two different sizes for the FIFO - one on the FPGA, the other on the host. The host can have a much larger buffer than the FPGA. If for some reason it's necessary, you can still use the on-board memory to store samples as they are acquired, then transfer them to the DMA FIFO in a separate loop.
  20. There shouldn't be a problem using a front panel control. The FPGA sets the control to TRUE when data is available. The Windows side sees the TRUE, reads data (for example, from a numeric control), then sets the same data ready control to FALSE. When the FPGA sees that the data ready control is FALSE, it loads a new value into the numeric control and sets the data ready control to TRUE. This works fine, I've used this technique repeatedly (for example, to load a sequence of values from the host into an FPGA memory block). Perhaps the key piece you're missing is that "control" and "indicator" are essentially identical in terms of FPGA interaction - either side (Windows/FPGA) can set and read either one - and a local variable is acceptable (and, as far as I know, reasonably efficient) on the FPGA.
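     Written out as pseudocode (C used here just for notation; "data_ready" and "value" stand for the boolean and numeric front panel items), the handshake is simply:
```c
#include <stdbool.h>
#include <stdio.h>

/* Simulation of the handshake. In the real system "data_ready" and "value"
   are a front panel boolean and numeric; either side may read or write them. */
static bool data_ready = false;
static int  value;

static void fpga_side(int next_sample)
{
    if (!data_ready) {        /* host has consumed the last value */
        value = next_sample;  /* load the new value first...      */
        data_ready = true;    /* ...then flag that it's available */
    }
}

static void host_side(void)
{
    if (data_ready) {
        printf("host read %d\n", value);
        data_ready = false;   /* tell the FPGA it can write again */
    }
}

int main(void)
{
    for (int i = 0; i < 3; i++) {
        fpga_side(100 + i);   /* in reality these run as independent loops */
        host_side();
    }
    return 0;
}
```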
  21. You cannot open a UDP port on a remote machine because UDP is stateless - there's no established connection. You supply the remote address at the time that you send data, and if the remote machine happens to be listening on that port then it might receive the data. The only reason to open a UDP port is to listen, not to send, and the address input exists for the case where a machine has multiple network cards and you only want to listen on one of them.
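     The same thing is visible at the C sockets level: the receiver binds a local port, and the sender just calls sendto with a destination address - nothing is ever "opened" on the remote machine. A rough sender-side sketch (BSD sockets, error handling omitted):
```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* No connection is ever established: we just hand the OS a datagram plus
       a destination address. If nothing is listening on that port on the
       remote machine, the data silently disappears. */
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in dest = {0};
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(6000);                       /* remote port    */
    inet_pton(AF_INET, "192.168.1.50", &dest.sin_addr);  /* remote address */

    const char msg[] = "hello";
    sendto(s, msg, sizeof msg, 0, (struct sockaddr *)&dest, sizeof dest);

    close(s);
    return 0;
}
```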
  22. I don't understand your question. Are you trying to call a C++ function through a Call Library Function node, or are you receiving/sending data over the network to a C++ program, or both? If you're using a Call Library Function node, there is no need to type cast at all - just set the parameter type to Adapt to Type if you're passing the structure by reference. If you're sending over the network, you would always send the actual data, not a pointer, so the question would not make sense.
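     For what it's worth, "Adapt to Type, passed by reference" on the LabVIEW side corresponds to a C prototype that simply takes a pointer to a matching struct - no casting anywhere. A sketch with invented names:
```c
#include <stdint.h>

/* The C side of a cluster passed by reference via Adapt to Type:
   the function just receives a pointer to a matching struct. */
typedef struct {
    int32_t channel;
    double  setpoint;
} Settings;              /* must match the cluster element-for-element */

int apply_settings(const Settings *s)
{
    return (s->channel >= 0) ? 0 : -1;   /* no type cast needed anywhere */
}

int main(void)
{
    Settings s = { 1, 2.5 };
    return apply_settings(&s);
}
```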
  23. Where you have a 10-element array, you need to replace that with a 10-element cluster. There should be no arrays in the final cluster - everything must have a fixed size. Once you do that, you can wire your cluster directly to Type Cast, or use it as the Type input and wire a string to it to convert that string into the cluster. For the two-dimensional array of strings, I assume that it's actually a two-dimensional array of chars (bytes) - this is a big difference - and that the intent is to have an array of 32 strings, each 64 characters long, or vice versa. If that is the case, create a cluster of 64 U8 values, then put 32 of those into another cluster. The "Array to Cluster" function speeds up this process enormously because you can enter the number of elements you want in the cluster, then create a control from it. To get string data into that format, convert the string to an array of bytes, then use Array to Cluster. Use the reverse process to get data back into a string from the cluster. Here I've modified your VI to demonstrate this; in the process I removed one dimension from the string array, so that you have an array of strings, not a two-dimensional array of strings. LS35_SetFrequency-modified.vi
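     In C terms, the layout you end up with corresponds to a fixed-size struct along these lines (field and type names invented, and assuming 32 strings of 64 bytes - swap the dimensions if it's the other way around):
```c
#include <string.h>

/* Assumed fixed-size layout on the instrument/C side; every LabVIEW array
   has to become a cluster of exactly this many elements. Field names and
   sizes here are only an example. */
typedef struct {
    double frequency;      /* scalar fields stay as they are               */
    int    values[10];     /* the 10-element array -> a 10-element cluster */
    char   names[32][64];  /* 32 "strings" of 64 bytes each                */
} LS35Settings;

int main(void)
{
    LS35Settings s;
    memset(&s, 0, sizeof s);
    /* Fixed-size strings are just byte blocks; unused bytes remain zero. */
    strncpy(s.names[0], "CHANNEL_A", sizeof s.names[0] - 1);
    return 0;
}
```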
  24. Is the receiving program expecting LabVIEW data, or is it expecting a C struct? If the latter, you will want to use the Type Cast function, but in order to do that you need to replace all of the arrays in your cluster with clusters containing the number of elements defined in the C struct definition. Arrays in LabVIEW have variable lengths; arrays in C are either pointers or are fixed-size. In your case you're dealing with fixed-size data, so you need to replace the LabVIEW arrays with an equivalent fixed-size structure, which is a cluster.
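     To make the pointer-versus-fixed-size distinction concrete (struct names invented for illustration):
```c
#include <stdio.h>

/* Fixed size and predictable layout: the data is stored inline, so the
   struct can be sent byte-for-byte - which is what Type Cast on a matching
   cluster of 8 elements produces. */
struct fixed_packet {
    int    count;
    double samples[8];
};

/* Only a pointer is stored; the samples live elsewhere, so sending the
   struct itself does not carry the data along. No LabVIEW cluster matches
   this layout directly. */
struct pointer_packet {
    int     count;
    double *samples;
};

int main(void)
{
    printf("fixed:   %zu bytes\n", sizeof(struct fixed_packet));
    printf("pointer: %zu bytes\n", sizeof(struct pointer_packet));
    return 0;
}
```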
  25. All the U8 array manipulation is useless. The Type input to Unflatten From String just needs the datatype - a string, in your case - and the actual length and content of that string are irrelevant. What are you trying to get out of this? If you just need the first 260 elements of the string, then use String Subset. There's no reason to unflatten a string from a string; the flattened form of a string is just the same string with the length prepended. If you're getting that data from C then the length information isn't there, which will prevent LabVIEW from unflattening it correctly.
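     For reference, the flattened form of a string is (by default) a big-endian 4-byte length followed by the raw bytes, which is exactly what a plain C buffer lacks. Building that prefix by hand in C would look roughly like this:
```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Builds the equivalent of a LabVIEW-flattened string: a big-endian 4-byte
   length followed by the raw bytes. A plain C buffer has no such prefix. */
static size_t flatten_string(const char *src, uint32_t len, uint8_t *out)
{
    out[0] = (uint8_t)(len >> 24);
    out[1] = (uint8_t)(len >> 16);
    out[2] = (uint8_t)(len >> 8);
    out[3] = (uint8_t)(len);
    memcpy(out + 4, src, len);
    return 4 + len;
}

int main(void)
{
    uint8_t buffer[64];
    size_t n = flatten_string("PATH", 4, buffer);
    printf("%zu bytes, length prefix ends with 0x%02X\n", n, buffer[3]); /* 8 bytes, 0x04 */
    return 0;
}
```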