Everything posted by ned

  1. From a quick read through the datasheet for the evaluation kit, I see the following: "The onboard MAXQ622 is only for demonstration purposes (USB interface) and does not implement a HART software stack. The serial UART data and control pins are exposed via jumpers as detailed in Table 1. They allow a hardware serial port on a PC or laptop to transmit and receive modulated signals. Note that these serial pins expect TTL level signals (3.3V), not RS-232 level, so a serial level shifter board is typically required for communications." Looks like the easiest way to connect this to LabVIEW is through a serial port connected to the serial connections on the board, with appropriate level shifting (or buy a USB-serial adapter that expects TTL level signals).
  2. Does the Maxim software come with a DLL or .NET assembly for interfacing to their device? If so, that's your best bet. While you can use VISA to talk directly to a USB device, you'd have to determine the correct protocol (through documentation or experimentation) and implement it, which would be very time consuming.
  3. Note that this also means that if you place an event structure inside some other structure - for example, in a case structure - then events will stack up in the queue for that event structure even if there's no possibility of it running. Coupled with the option to lock the front panel until event processing completes, this can (and often does - see the NI forums) cause a lot of confusion, because the user clicks a button and the front panel freezes but since the event never processes, the front panel never gets freed.
  4. I'd check your code very carefully. I've seen something like this before and finally traced it to a difficult-to-find coding error. I don't remember the exact details, but my vague recollection is I had some code that obtained a new notifier if the existing notifier input was invalid, coupled with a situation where I wasn't properly passing the notifier ref out only when the existing notifier was valid (could have been a "use default if unwired" sort of thing). As a result, a probe on the wire would rapidly alternate between a valid and invalid notifier refnum. EDIT: and then I looked at your images more closely... no idea. I don't have LV2015. Does every notifier refnum do that, or just one specific instance? You're not doing something funny like casting it to an int, then back to a notifier, right?
  5. I don't think they've changed. What do you mean by "showing oddly"?
  6. Reshape and Transpose aren't free. Reverse and Decimate are; see https://lavag.org/topic/7307-another-reason-why-copy-dots-is-a-bad-name-for-buffer-allocations/
  7. Learning more programming languages can only make you a better programmer, not a worse one. Sure, C programmers are often lousy LabVIEW programmers at first because the model is so different, but that doesn't mean they don't (or can't) eventually become proficient, and that's certainly not unique to LabVIEW. C programmers also tend to write poor code in functional languages when they first get started; for that matter, they struggle with Matlab's model for vectorizing array operations (I say this having learned BASIC, then C, and struggled initially with both LabVIEW and Matlab in college). If you're going to learn another language, C is a good choice because it's:
     - simple (very few keywords)
     - been around a long time, still in use, stable and well-established
     - the basis for so many other languages (even in LabVIEW: where did the funny % format specifiers come from? The C scanf/printf functions)
     - the standard for defining the interface for functions in shared libraries (not just DLLs on Windows, but on other platforms too)
     I'll leave aside the "real language" question, other than to note that whether a language is text-based is irrelevant. Very early computer programming involved literally connecting wires, and the people who were proficient at that probably thought C wasn't "real" programming either (yes, I know I'm skipping years of development there, including punch cards and assembly).
  8. Why would you want to create an XNode for this, rather than a standard VI? It sounds like either you've misunderstood your professor, or your professor doesn't understand what he wants.
  9. No. You'll need to tell the operating system what the preferred route is to the server. In Windows you can set network priority so that it will preferentially use a particular network card (for all connections), or use the command-line "route" command to set up a route specifically to your destination. You could do a system exec call to set that up before opening the TCP connection. I had to do something similar with a device that insisted that its MAC address be manually added to the arp table before it could communicate.
  10. I had an application that would stop reading from the serial port (I'd get a timeout, not a "freeze", but I reduced the default timeout time) that was fixed by switching from asynchronous to synchronous, or vice versa. The reads were relatively infrequent (once per second or so) and switching modes had no effect on performance. This was with a cheap USB-to-serial adapter.
  11. Cool idea. You might be interested in the similar-looking LabSocket (https://lavag.org/topic/17164-new-release-of-labsocket-the-easy-way-to-extend-labview-to-the-web/), which I think uses a similar approach (although I haven't used it myself nor have I downloaded your code).
  12. I can't contribute a lot to the LabVIEW side of this discussion, but I wouldn't try to outsmart the OS in terms of file IO. The OS can delay or cache writes and might implement different schemes depending on the type of disk (that is, it will schedule writes to an SSD differently than to a standard disk). Writing larger chunks is usually better than writing small chunks, and if you know how large your file will be in advance you might get some benefit from setting the file to that size initially, but other than that I'd let the OS do the work.
  13. I haven't opened your VI - I'm still on LV2012 - but for the specific set of arguments 0,NULL you just need to pass two zeroes, both passed by value. Very easy. The first should be configured as an I32, the second as a pointer-sized integer (again, passed by value). No need to mess with arrays.
  14. A similar question gets asked frequently over on the NI forums; take a look at these threads (this is just from the first few search results):
     http://forums.ni.com/t5/LabVIEW/Dynamic-combo-box-in-cluster-array/td-p/3180033
     http://forums.ni.com/t5/LabVIEW/How-to-populate-an-array-of-ring-controls-with-strings/td-p/1841053
     http://forums.ni.com/t5/LabVIEW/Creating-an-Array-of-Rings-Programmatically/td-p/3070688
     http://forums.ni.com/t5/LabVIEW/How-to-create-an-array-of-ring-with-a-different-items-values-for/td-p/2823240
     http://forums.ni.com/t5/LabVIEW/Dynamically-changing-elements-in-an-array-of-menu-ring-controls/td-p/3141078
  15. How did you configure the call library function node? What exact error occurs? "Does not work" is not enough information. Show us your code.
  16. While this is unlikely to be the problem, is there a difference in the network connections in how A and B are connected to the server? Is one directly on the same switch, and the other further away?
  17. In my experience, the Nagle algorithm isn't as problematic as everyone makes it out to be. Also, 2-4 seconds is longer than I'd expect due to Nagle-related delay. My first guess is you have a TCP read somewhere that's expecting slightly more data than it actually receives, so it waits the full timeout period. What TCP Read mode are you using? Let's say the client is using CRLF mode, but the server doesn't append the end-of-line character to the response - TCP Read will wait the full timeout period, and still return a valid response (assuming you don't check for that CRLF).
  18. I think a picture control is the way to go. I would start with a simpler interface than dragging the item around on the screen. Instead you might have buttons for forward/back along with a control for the distance to move, and also a rotation button with number of degrees. Another option would be to use the joystick. For one FIRST team I mentored, we made a simple demo where they could move a shape around within a picture control using the joystick. If your image is simple (for example a rectangle), rotation is not too complicated, you might start with the "2D Cartesian Coordinate Rotation" function.
  19. A minor note: these are equally preferable, and as I understand it they're implemented the same way in the compiler.
  20. You should close references to all .NET objects when you're done using them. This includes objects that you didn't explicitly open in LabVIEW but are returned from .NET methods. Close references in the reverse order from opening them. If you call a method on an object A that returns object B, close the reference to object B before closing A.
  21. No, you cannot directly pass a cluster as a struct. Instead you need to create an instance of the struct (using a constructor), then set the value of each field, as shown here: http://forums.ni.com/t5/LabVIEW/How-to-transfer-structure-contains-structure-to-net-assembly-in/td-p/2363726
  22. Another solution, although this may sound overly complicated, is to maintain a buffer of all the received bytes from a given connection yourself. Since I'm already bundling a timestamp with my TCP connection IDs, adding a string to that cluster isn't a big deal. Each time I read bytes from a connection, I append them to the existing buffer, then attempt to process them as a complete packet (or as multiple packets, in a loop). Any remaining bytes after that processing go back in the buffer. There might be another approach to work around the Nagle algorithm. Try doing a read (even of 0 bytes) immediately following each write. I believe this will force the data to be sent, on the assumption that the read is waiting for a response from the just-sent packet. I'm not completely certain of this, but I think I tried it once and it worked. For a heartbeat, if you're just checking if the remote system is running, it might be worth switching to UDP.
  23. Error 56 - a timeout - isn't really an error, it just means there wasn't any data. The connection is still valid, and data could arrive later. In most of my TCP code I ignore error 56, especially if I'm polling multiple connections and expect that there won't be data on most of them most of the time. I bundle my TCP references with a timestamp indicating the last time that data was received on that connection, and if it's ever been longer than an hour since I last received data on that connection, then I close that connection. I've used this same approach with the STM (adapting existing code that used it) and it worked fine there too.
  24. Yes, that makes sense, because LabVIEW does know the size of the array elements.
  25. Learn C. All of this is based on C conventions. When you understand C data types and pointers, the LabVIEW part will make sense (mostly - I still get thrown off by the way clusters are packed versus C structures). Unfortunately I don't know of any other way to learn it.