Everything posted by ned

  1. Simple, slow solution: run your .NET application using System Exec and parse the output string in LabVIEW. But then you need to run the .NET application once per piece of data you want to retrieve; it can't run continuously, since LabVIEW won't get the output string until the application terminates. Better solution: instead of writing the results to the console, push them to a UDP port and let LabVIEW receive those UDP packets. I've used this approach in several places and it's worked well. For communication between applications on the same machine, UDP is simple, fast, and reliable (the loopback interface shouldn't lose packets). Fully integrated approach: rework the .NET code so that you can call it through LabVIEW's .NET functions, but this might involve rewriting a lot of code, especially if some part of the .NET code needs to run continuously in the background.
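     A minimal sketch of the UDP sender on the .NET side, written here in F# (any .NET language works the same way); the port number and message format are arbitrary placeholders:

         open System.Net.Sockets
         open System.Text

         // Send each result as one UDP datagram to the loopback address.
         // Port 61557 is an arbitrary choice; a LabVIEW UDP Read listening
         // on the same port receives each datagram as one string.
         let client = new UdpClient()

         let sendResult (result: string) =
             let bytes = Encoding.UTF8.GetBytes(result)
             client.Send(bytes, bytes.Length, "127.0.0.1", 61557) |> ignore

         sendResult "measurement,42.7"
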
  2. My apologies for the confusion. I believe my comments were accurate for other self-contained RIO devices (such as the cRIO); I haven't worked with a myRIO and didn't realize it has a USB interface. That said, as others have already explained, there's no way for the myRIO's FPGA and your C# code to share memory directly, nor can your C# code write directly to the DMA buffer. Sounds like others already have you pointed in the right direction on this.
  3. Just ran across this http://fsharpforfunandprofit.com/rop/ (Railway Oriented Programming), in which the author represents normal input and an error condition as two railway tracks, with a result that looks like LabVIEW's error wire. Made me think of this discussion. In the presentation, errors are handled as discriminated unions rather than tuples: the value is either Success (of a value) or Failure (of an error). Interesting to see a functional error-handling implementation that resembles the LabVIEW approach.
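     For reference, a minimal F# sketch of the two-track idea from that presentation (the Result type and the sample steps are simplified from the article):

         // A value on the "track" is either Success (carrying data) or Failure
         // (carrying an error), much like LabVIEW's data-plus-error-wire pair.
         type Result<'TSuccess, 'TFailure> =
             | Success of 'TSuccess
             | Failure of 'TFailure

         // bind routes a Failure straight past the next step, the way LabVIEW
         // functions skip execution when the incoming error wire is set.
         let bind f input =
             match input with
             | Success x -> f x
             | Failure e -> Failure e

         let validate x = if x >= 0 then Success x else Failure "negative input"
         let halve x = Success (x / 2)

         let pipeline x = validate x |> bind halve   // Success 4 for x = 8
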
  4. Let's clarify "Host" and "Target" here. Your myRIO is running a Real-Time operating system; that's considered the "Host." It also contains an FPGA; that's the "Target." If the myRIO supports it, you can also connect directly from the Windows machine to the FPGA, bypassing the Real-Time system, in which case the Windows computer is the "Host" and all FPGA communication between the host and target goes over Ethernet, which is a bit slower. I believe, but don't have a way to test or confirm, that you can still use a DMA FIFO on a remote FPGA target. However, if you are using the Real-Time portion of the myRIO, then the host-to-target DMA FIFO will transfer data from the Real-Time system to the FPGA, and you'll need some other way (such as TCP or UDP) to send data from the Windows computer to the myRIO.
  5. Yes, you can put the array in a memory block on the FPGA and then transfer one element at a time to the host using front panel controls. On the FPGA front panel you'll need three items: an address, a value, and a boolean that signals a read. The host sets the address, then sets the boolean true. When the FPGA sees the true value, it reads the address, retrieves that element from memory, writes it to the value control, and sets the boolean false. When the host sees that the boolean is false, it reads the value, then updates the address and sets the boolean true again, and so on.
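     A sketch of that handshake in F#, purely to pin down the ordering; writeAddress, writeBool, readBool, and readValue are hypothetical stand-ins for LabVIEW's FPGA Read/Write Control calls, simulated here so the example runs on its own:

         // Simulated FPGA side: a memory block plus the three front-panel items.
         let fpgaMemory = [| 10; 20; 30; 40 |]
         let mutable address = 0
         let mutable value = 0
         let mutable request = false

         let writeAddress a = address <- a
         let writeBool b =
             request <- b
             if request then                   // the FPGA loop services the request...
                 value <- fpgaMemory.[address]
                 request <- false              // ...and clears the flag when done
         let readBool () = request
         let readValue () = value

         // Host side: set the address, raise the flag, wait for the FPGA to
         // clear it, read the value; repeat for each element.
         let readArray size =
             [| for addr in 0 .. size - 1 do
                    writeAddress addr
                    writeBool true
                    while readBool () do ()
                    yield readValue () |]

         printfn "%A" (readArray fpgaMemory.Length)
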
  6. LabVIEW is often used for hardware control and user interfaces. Both involve side effects - things that happen outside a computation and often must occur at a particular time or in a particular sequence. Functional languages are deliberately restrictive about side effects; there are ways to handle them, but it can become tedious. LabVIEW also allows functions (VIs) to return multiple named values - something few text languages support directly - and I think that would limit the options for higher-order functions. The lack of tail recursion in LabVIEW is also an issue. That said, there are some ways in which LabVIEW resembles a functional language, particularly the immutability of values on wires. I'd love to see a type inference system, similar to the one in many functional languages, make its way into LabVIEW, since that would make it much easier to write polymorphic VIs. I tried to explain this in the thread Yair mentioned.
  7. That's exactly the point I was trying to make, although perhaps I didn't explain it well. A proper user interface should display only the options that are relevant for that choice. In the case of a collection of generic generator classes, somewhere you'll need to check the exact type of the child class to determine the correct options. At that point, you've determined the correct child class, so you can cast the generic item to the specific child class and access any class-specific properties.
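     To illustrate in F# (GeneratorBase, SineGenerator, and Frequency are made-up names; the same pattern applies to LabVIEW's To More Specific Class):

         type GeneratorBase() = class end

         type SineGenerator() =
             inherit GeneratorBase()
             member val Frequency = 1000.0 with get, set

         // The UI code has already identified which child it has, so a
         // type-test downcast makes the class-specific property available.
         let configure (g: GeneratorBase) =
             match g with
             | :? SineGenerator as sine -> sine.Frequency <- 2500.0  // sine-only option
             | _ -> ()   // other children expose other options

         configure (SineGenerator() :> GeneratorBase)
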
  8. I would choose option 2, because I think it's misleading to make it possible to set meaningless properties on a class just to make it fit the interface. Is it really that much additional complexity for the calling code? You know which specific child you have when you instantiate it, so it's not much extra work to set the class-specific properties at the same time. If you want to change a property further down the line, yes, there's some complexity in checking whether that property is valid for that class. But say you're presenting this to the user: would you want the user to be able to change a property that has no effect? You'd need to do that check anyway, or else confuse the user when changing the property does nothing because that child class doesn't use it (which I'd consider a poor UI choice).
  9. If you want to do this using a table, here's some sample code that shows one way you can do it. As Hooovahh mentioned, it uses a table indicator, and controls that are moved on top of the table for each cell to make it appear that the table is editable. http://forums.ni.com/t5/LabVIEW/array-of-cluster/m-p/1822451#M625032
  10. Forgot about this thread for a month. Shaun, you might have misunderstood how strict typing works in F# (I probably explained it poorly). When you write a function in F#, for the most part you don't specify the actual types of the parameters. Instead, the compiler determines what operations need to be possible on those parameters, and then only allows values to be passed as those parameters when they support the required operations. As a simple example, a sort function might accept an array of an unspecified type, and the only operation that needs to be supported is comparison. The compiler will then let you pass an array of any type that supports comparison to that function. So you get the benefits of both generic functions (because functions don't specify parameter types) and strict type checking (because the compiler can determine at compile time whether the parameters are valid types). That's something I'd love to see in LabVIEW, too.
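      A small F# example of what that looks like in practice; the compiler infers the comparison constraint on its own:

          // No parameter types are written anywhere, yet this is strictly typed:
          // the inferred signature is 'a [] -> 'a [] when 'a : comparison.
          let sortCopy items = Array.sort items

          let ints  = sortCopy [| 3; 1; 2 |]     // fine: int supports comparison
          let words = sortCopy [| "b"; "a" |]    // fine: string supports comparison
          // let funcs = sortCopy [| id; not |]  // compile error: functions aren't comparable
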
  11. If you want to program your device through a direct serial connection to the microcontroller, then yes, the microcontroller needs to be running some code (a bootloader) that knows how to interpret the data it receives and write it to the appropriate memory locations. Getting that bootloader code installed in the first place requires the use of an external programmer (another hardware device). Whether you need to do any additional processing of the hex file before sending it depends on the bootloader. If the bootloader knows how to interpret the hex file format, then you don't need to do anything. However, if the bootloader expects data in some other format, then you'll need code to do that conversion.
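      As an illustration of what "interpreting the hex file format" involves, here is a minimal F# parser for one record line, assuming the common Intel HEX format (a real bootloader or conversion tool would also validate the checksum and handle every record type):

          // Parse one Intel HEX record, e.g. ":10010000214601360121470136007EFE09D2190140".
          // Layout after the ':' is byte count, 16-bit address, record type,
          // data bytes, checksum - all as two-character hex pairs.
          let parseHexRecord (line: string) =
              let hexByte i = System.Convert.ToByte(line.Substring(i, 2), 16)
              let byteCount = int (hexByte 1)
              let address   = (int (hexByte 3) <<< 8) ||| int (hexByte 5)
              let recType   = hexByte 7
              let data      = [| for i in 0 .. byteCount - 1 -> hexByte (9 + 2 * i) |]
              let checksum  = hexByte (9 + 2 * byteCount)
              recType, address, data, checksum
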
  12. You'll need to provide a lot more information about the microcontroller and how to program it before we can answer that question.
  13. Do you have a link to that discussion? There are so many things one can do with an FPGA that I can imagine branching an FPGA reference seeming weird in one application and completely logical in another. One of the great things about the FPGA is that parallel loops are truly parallel and independent, so it can make sense (to me, anyway) to treat those loops almost as separate devices. Even if there's only one FPGA loop, I've written code where the host side forks the reference so that one host loop handles the ongoing, routine operations (such as reading data from a FIFO) while another handles occasional updates (changing a setting by writing to a front-panel element). At least in my opinion, if branching the FPGA reference to multiple loops lets you separate functions logically and results in clean code, there's no reason not to do it.
  14. Reviving an old topic here because I've been playing with F# a bit at work, and it's made me wonder - what if LabVIEW had a type-inference system similar to F# (or ML, or other functional languages)? Meaning that the compiler deduces data types at edit/compile time, but only to whatever level of specificity is actually required. For example, in a sort function, as long as there is some way to compare two items of the same type, the actual type doesn't matter. This lets you write generic functions and still get the benefits of strict typing, with type correctness checked at compile time. It might be an alternate approach to OOP interfaces for LabVIEW. Of course it would change the look of the language - wires could no longer be colored based on type, since many functions would accept multiple types. I realize it's unlikely to happen in LabVIEW any time soon. Anyone else played with this aspect of functional languages? Comments or thoughts as to whether it could be done in LabVIEW?
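      To make the "only as specific as required" point concrete, a tiny F# example (distinct from the sort example above):

          // The compiler infers twice : ('a -> 'a) -> 'a -> 'a, because applying
          // f to its own output is the only requirement the code imposes.
          let twice f x = f (f x)

          let fifteen = twice ((+) 5) 5                          // 'a becomes int here
          let shouted = twice (fun (s: string) -> s + "!") "hi"  // and string here
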
  15. Odd. Another option is to treat it as an array of U8 instead of a string. Again you'll need to allocate enough space for it initially. Then you can convert from U8 to string yourself. Can you share your code where you configure the parameter as a string, and also the prototype for the function you're calling? I assume you've set that parameter to be a C string.
  16. If that parameter is a string, why not just configure it as a string in the Call Library Function Node setup? Strings are always passed by pointer, and LabVIEW knows how to deal with it. Also, I assume you know what the maximum string length is. You should preallocate memory for that string, and pass it in on the input side. You can preallocate a string by initializing an array of U8 to the correct size, then converting the byte array to a string. Or, you can set the string length in the Call Library Function Node setup.
  17. I don't know about "supported" but calling functions within LabVIEW.exe is the standard way to do memory allocation when it's necessary to pass a pointer to a DLL function, and this is well-documented: https://decibel.ni.com/content/docs/DOC-9091
  18. No. And in case you start wondering, it's not like there's one that's hidden; there isn't one at all (and even if there were, there wouldn't be any applications to launch from it, so it wouldn't do anything). On some (maybe all) cRIOs you can plug in a serial cable, enable the serial console output, and watch the boot sequence in a terminal, but that's as close as you'll get. A real-time operating system is much more basic and limited than what you probably think of as an operating system.
  19. Can you post the header file (the function prototypes) for the real functions you want to call? The prototype for the wrapper around the function that returns the datablock struct by value should be something like: void some_other_func_alt(datablock *db); Then you call that from LabVIEW, pre-allocating the datablock struct as a cluster and passing it (Adapt to Type) to the wrapper. Inside the wrapper, do a memcpy from the returned struct, into the struct that was passed by pointer. While it would be nice to avoid that extra memory copy, I'm not sure there's a way to do so.
  20. I think his goal is to drop a VI onto the block diagram and have it be created as a new, unsaved VI that already contains some code, similar to what happens when you select some code and do Create SubVI. I can see why this might be neat, but I don't know how you would achieve it.
  21. Blocking and determinism don't really go together. What are you trying to achieve here? Do you want the deterministic loop to wait until it receives a new value (at which point it won't be deterministic anymore), or do you want that loop to run at a consistent rate regardless of whether there is new data? If the latter, one approach is a functional global variable with the "Skip Subroutine if Busy" option set, plus a boolean output that confirms the data is valid before you use it, in case the read was skipped.
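      A loose F# analogy for that second approach (this is not how LabVIEW implements it, just the shape of the idea): Monitor.TryEnter plays the role of "Skip Subroutine if Busy", and the boolean output tells the caller whether the value is trustworthy:

          open System.Threading

          let gate = obj ()
          let mutable latest = 0.0

          // Returns (valid, value). If the writer holds the lock, skip
          // immediately instead of blocking, so the timed loop keeps its schedule.
          let tryRead () =
              if Monitor.TryEnter gate then
                  try
                      (true, latest)
                  finally
                      Monitor.Exit gate
              else (false, 0.0)
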
  22. I really hope one of them has an exciting-sounding name, but actually just introduces random crashes.
  23. Shaun's probably right, but in case you do need to sign-extend, one approach is this: http://forums.ni.com/t5/LabVIEW/24bit-hex-to-2-s-complement/m-p/1686050#M599007
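      If you do need it, sign extension itself is just two shifts; in F# (any language with an arithmetic right shift works the same way):

          // Shift the 24-bit sign bit up to bit 31, then arithmetic-shift back
          // down; the right shift on a signed int copies the sign bit downward.
          let signExtend24 (raw: int) = (raw <<< 8) >>> 8

          printfn "%d" (signExtend24 0xFFFFFF)   // -1
          printfn "%d" (signExtend24 0x7FFFFF)   // 8388607
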
  24. Also note that any time you connect a solenoid valve to a relay, it is a good idea to install a flyback diode as well, to protect the relay contacts (and any driving circuitry) from the voltage spike the solenoid coil produces when it's switched off.
  25. I don't know what's wrong with your code, but for comparison purposes, others have implemented SPI using a DAQ card: http://www.ni.com/example/31163/en/ http://forums.ni.com/t5/Digital-I-O/Implementing-I2C-or-SPI-with-Pxi6508-in-Labview/td-p/554383 (scroll through the thread a bit)