Posts posted by ned

  1. Simple, slow solution: run your .NET application using System Exec and parse the output string in LabVIEW. But then you need to run the .NET application once per piece of data you want to retrieve; it can't run continuously, since LabVIEW won't get the output string until the application terminates.

     

    Better solution: instead of writing the results to the console, push them to a UDP port, and let LabVIEW receive those UDP packets. I've used this sort of approach in several places and it's worked well. For communication between applications on the same machine, UDP is simple, fast, and reliable (the loopback interface shouldn't lose packets). A minimal sketch of the sender side follows at the end of this post.

     

    Fully integrated approach: rework the .NET code so that you can call it through the LabVIEW .NET functions, but this might involve rewriting a lot of code, especially if some part of the .NET code needs to run continuously in the background.
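
    Here's that sender-side sketch for the UDP option, written in C rather than .NET (the equivalent calls exist in System.Net.Sockets, and on Windows you'd use Winsock with the same structure); the port number is an arbitrary choice:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Send one result as a datagram to localhost:61557, where the
       LabVIEW side would be listening with UDP Open / UDP Read. */
    int send_result(const char *msg)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s < 0)
            return -1;

        struct sockaddr_in dest;
        memset(&dest, 0, sizeof dest);
        dest.sin_family = AF_INET;
        dest.sin_port = htons(61557);
        dest.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

        int ok = sendto(s, msg, strlen(msg), 0,
                        (struct sockaddr *)&dest, sizeof dest) >= 0;
        close(s);
        return ok ? 0 : -1;
    }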

  2. I am not sure if this is entirely true. Go look through the documentation for DMA/FIFO on the labview zone pages and clear this. You don't have to go over ethernet to communicate with the myRIO either. With correct memory mapping (http://www.abhisheksur.com/2012/02/inter-process-communication-using.html), "Named Pipe" (http://msdn.microsoft.com/en-us/library/bb546085(v=vs.110).aspx), or WCF Interprocess Communication (http://tech.pro/tutorial/855/wcf-tutorial-basic-interprocess-communication), I believe communication can be bridged.

    My apologies for the confusion. I believe my comments were accurate for other self-contained RIO devices (such as the cRIO); I haven't worked with a myRIO and didn't realize it has a USB interface. That said, as others have already explained, there's no way for the myRIO's FPGA and your C# code to share memory directly, nor can your C# code write directly to the DMA buffer.

     

    Sounds like others already have you pointed in the right direction on this.

  3. This isn't problematic. VIs that return multiple values are analogous to functions that return tuples.

    Just ran across this http://fsharpforfunandprofit.com/rop/ (Railway Oriented Programming), in which the author represents normal input and an error condition as two railway tracks, with a result that looks like LabVIEW's error wire. Made me think of this discussion. In the presentation, errors are handled as discriminated unions rather than tuples: a value is either Success (carrying a result) or Failure (carrying an error). Interesting to see a functional error-handling implementation that resembles the LabVIEW approach.
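
    Roughly what that Success/Failure type looks like if you approximate a discriminated union in C with a tagged union (the names here are my own, not from the presentation):

    typedef enum { RESULT_SUCCESS, RESULT_FAILURE } result_tag;

    typedef struct {
        result_tag tag;
        union {
            double value;      /* valid when tag == RESULT_SUCCESS */
            int    error_code; /* valid when tag == RESULT_FAILURE */
        } u;
    } result;

    /* Each step passes failures through untouched - the "failure track"
       of the railway, which behaves much like LabVIEW's error wire. */
    result next_step(result r)
    {
        if (r.tag == RESULT_FAILURE)
            return r;         /* stay on the failure track */
        r.u.value *= 2.0;     /* normal processing on the success track */
        return r;
    }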

  4. The problem is I want to send the z-coordinates to the fpga on myRIO using the Host to Target scope in the link above. So far, I can't figure out how to do that. 

    Is it even possible for labview to access physical memory on my windows computer?

    Let's clarify "Host" and "Target" here. Your myRIO is running a Real-Time operating system; that's considered the "Host." It also contains an FPGA; that's the "Target." If the myRIO supports it, you can also connect directly from the Windows machine to the FPGA, bypassing the Real-Time system, in which case the Windows computer is the "Host" and all FPGA communication between the host and target goes over Ethernet, which is a bit slower. I believe, but don't have a way to test or confirm, that you can still use a DMA FIFO on a remote FPGA target. However, if you are using the Real-Time portion of the myRIO, then the host-to-target DMA FIFO will transfer data from the Real-Time system to the FPGA, and you'll need some other way (such as TCP or UDP) to send data from the Windows computer to the myRIO.

  5. Yes, you can put the array in a memory block on the FPGA and then transfer one element at a time to the host using front panel controls. On the FPGA front panel you'll need three elements: an address, a value, and a boolean to signal a read request. The host sets the address, then sets the boolean true. When the FPGA sees the true value, it reads the address, retrieves the element at that address from the memory block, writes it to the value element, and sets the boolean false. When the host sees that the boolean is false, it reads the value, then updates the address and sets the boolean true again, and so on.
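
    Written out as host-side pseudocode in C, with hypothetical fpga_read_*/fpga_write_* helpers standing in for FPGA Read/Write Control nodes, the handshake looks like this:

    /* Hypothetical wrappers around the FPGA front panel elements. */
    extern void     fpga_write_u32(const char *name, unsigned value);
    extern unsigned fpga_read_u32(const char *name);
    extern void     fpga_write_bool(const char *name, int value);
    extern int      fpga_read_bool(const char *name);

    /* Host side: fetch n elements, one handshake cycle per element. */
    void read_array(unsigned *data, unsigned n)
    {
        for (unsigned addr = 0; addr < n; addr++) {
            fpga_write_u32("Address", addr);  /* which element to fetch */
            fpga_write_bool("Read", 1);       /* request the read       */
            while (fpga_read_bool("Read"))    /* FPGA clears Read once  */
                ;                             /* Value has been written */
            data[addr] = fpga_read_u32("Value");
        }
    }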

  6. LabVIEW is often used for hardware control and user interfaces. Both involve side effects - things that happen that aren't part of the computation itself and often must occur at a particular time or in a particular sequence. Pure functional languages deliberately restrict side effects; while there are ways around this, it can become tedious.

     

    LabVIEW also allows functions (VIs) to return multiple values - something I haven't seen in any other language, and which I think would limit the options for higher-order functions. The lack of tail recursion in LabVIEW is also an issue.

     

    That said, there are some ways in which LabVIEW resembles a functional language, particularly the immutability of values on wires. I'd love to see a type inference system, similar to the one in many functional languages, make its way into LabVIEW, since that would make it much easier to write polymorphic VIs. I tried to explain this in the thread Yair mentioned.

  7. Hmmm, what if the UI doesn't expose that choice unless the user selects the White Noise option?

    That's exactly the point I was trying to make, although perhaps I didn't explain it well. A proper user interface should display only the options relevant to the selected choice. With a collection of generic generator classes, somewhere you'll need to check the exact child class to determine which options apply. At that point you know the correct child class, so you can cast the generic item to that specific class and access any class-specific properties.

  8. I would choose option 2, because I think it's misleading to make it possible to set meaningless properties on a class just to make it fit the interface. Is it really that much additional complexity for the calling code? You know which specific child you have when you instantiate it, so it's not much extra work to set some class-specific properties at the same time. If you want to change a property further down the line, yes, there's a bit of complexity in checking whether that property is valid for that class. But say you're presenting this to the user - would you want the user to be able to change a property that has no effect? You'd need to do that check anyway, or else confuse the user when a property they change does nothing because that child class doesn't use it (which I'd consider a poor UI choice).

  9. I hate strict typing. It prevents so much generic code. I like the SQLite method of dynamic typing, where there is an "affinity" but you can read and write as anything. I also like PHP's dynamic typing, but that is a bit looser than SQLite and therefore a bit more prone to type-cast issues, though those are still few and far between. That is why you sometimes see things like value+0.0, to make sure the value is stored in the variable as a double.

     

    Generally, though, I have spent a lot of time writing code to get around strict typing. A lot of my code would be much cleaner and more generic if it didn't have to be riddled with Variant to Data calls with hard-coded clusters and conversions to every datatype under the sun. You wouldn't need a polymorphic VI for every datatype, or a humongous case statement for the same. It's why I choose strings, which are the next best thing, with variants coming in third.

     

    Forgot about this thread for a month. Shaun, you might have misunderstood how strict typing works in F# (I probably explained it poorly). When you write a function in F#, for the most part you don't specify the actual types of the parameters. Instead, the compiler determines what operations need to be possible on those parameters, and then only allows values to be passed as those parameters when they support the required operations. As a simple example, a sort function might accept an array of an unspecified type, and the only operation that needs to be supported is comparison. The compiler will then let you pass an array of any type that supports comparison to that function. So you get the benefits of both generic functions (because functions don't specify parameter types) and strict type checking (because the compiler can determine at compile time whether the parameters are valid types). That's something I'd love to see in LabVIEW, too.
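
    The closest analog in a language most LabVIEW users have seen is C's qsort, where the element type is unspecified and the one required operation - comparison - has to be passed in explicitly; F#'s inference effectively derives that requirement for you at compile time:

    #include <stdlib.h>

    /* The comparison is the only operation the sort needs; in F# the
       compiler infers this "must be comparable" constraint itself. */
    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    void sort_ints(int *arr, size_t n)
    {
        qsort(arr, n, sizeof(int), cmp_int);
    }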

  10. If you want to program your device through a direct serial connection to the microcontroller, then yes, the microcontroller needs to be running some code (a bootloader) that knows how to interpret the data it receives and write it to the appropriate memory locations. Getting that bootloader code installed in the first place requires the use of an external programmer (another hardware device). Whether you need to do any additional processing of the hex file before sending it depends on the bootloader. If the bootloader knows how to interpret the hex file format, then you don't need to do anything. However, if the bootloader expects data in some other format, then you'll need code to do that conversion.
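
    For reference, a record in an Intel HEX file (the format most microcontroller toolchains emit) is an ASCII line of the form ":LLAAAATTDD...CC" - byte count, address, record type, data, checksum. A minimal sketch of decoding one record, leaving out checksum verification:

    #include <stdio.h>

    static int hex_byte(const char *p)   /* two hex chars -> 0..255 */
    {
        unsigned v;
        return sscanf(p, "%2x", &v) == 1 ? (int)v : -1;
    }

    /* Decode ":LLAAAATTDD...CC" into data[]; returns the byte count,
       or -1 if the line doesn't start a record. */
    int parse_record(const char *line, unsigned *address, int *type,
                     unsigned char *data)
    {
        if (line[0] != ':')
            return -1;
        int len = hex_byte(line + 1);
        *address = ((unsigned)hex_byte(line + 3) << 8) | hex_byte(line + 5);
        *type = hex_byte(line + 7);
        for (int i = 0; i < len; i++)
            data[i] = (unsigned char)hex_byte(line + 9 + 2 * i);
        /* a real implementation would also verify the trailing checksum */
        return len;
    }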

  11. My uncertainty stems from something someone said to me a long time ago on LAVA, that it was very strange that I branched the FPGA reference to two different loops, and that they had never needed to do that.

    Do you have a link to that discussion? There are so many things one can do with an FPGA that I can imagine branching an FPGA reference seeming weird in one application and completely logical in another. One of the great things about the FPGA is that parallel loops are truly parallel and independent, so it can make sense (to me, anyway) to treat those loops almost as separate devices. Even if there's only one FPGA loop, I've written code where the host side forks the reference so that one host loop can handle the ongoing, routine operations (such as reading data from a FIFO) while another handles the occasional updates (changing a setting by writing to a front-panel element). At least in my opinion, if branching the FPGA reference to multiple loops lets you separate functions logically and results in clean code, there's no reason not to do it.

  12. Reviving an old topic here because I've been playing with F# a bit at work, and it's made me wonder - what if LabVIEW had a type-inference system similar to F#'s (or ML's, or other functional languages')? By that I mean the compiler deduces data types at edit/compile time, but only to whatever level of specificity is actually required. For example, in a sort function, as long as there is some way to compare two items of the same type, the actual type doesn't matter. This lets you write generic functions while still getting the benefits of strict typing and compile-time type checking. It might be an alternate approach to OOP interfaces for LabVIEW. Of course it would change the look of the language - wires could no longer be colored by type, since many functions would accept multiple types. I realize it's unlikely to happen in LabVIEW any time soon. Anyone else played with this aspect of functional languages? Comments or thoughts as to whether it could be done in LabVIEW?

  13. If that parameter is a string, why not just configure it as a string in the Call Library Function Node setup? Strings are always passed by pointer, and LabVIEW knows how to deal with that. Also, I assume you know what the maximum string length is; you should preallocate memory for that string and pass it in on the input side. You can preallocate a string by initializing a U8 array of the correct size and converting the byte array to a string, or you can set the string's minimum size in the Call Library Function Node setup.
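
    As an illustration, a DLL function with this calling pattern typically looks like the following (the name and parameters here are hypothetical):

    /* Fills buf with a NUL-terminated string of at most len bytes.
       The caller - LabVIEW, in this case - must allocate buf first. */
    int get_status_text(char *buf, int len);

    In the Call Library Function Node, you'd configure buf as a String passed as a C String Pointer and wire in a string preallocated to the maximum length.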

  14. Also, here's a tip: if you need to call a function in LabVIEW.exe using a Call Library Function Node, just type "LabVIEW" (without the quotes) in the path box. That's what NI does; it doesn't rely on a specific installation path (so you don't need to use the path input), and AFAIK it's platform-independent. In case it's not obvious, though, I doubt calling functions in LabVIEW.exe (except through VIs provided by NI) is at all supported by NI. (All of the VIs in vi.lib that I've seen using this method were password-protected.)

    I don't know about "supported" but calling functions within LabVIEW.exe is the standard way to do memory allocation when it's necessary to pass a pointer to a DLL function, and this is well-documented: https://decibel.ni.com/content/docs/DOC-9091
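
    For reference, the memory-manager calls typically used for this purpose are exported by LabVIEW.exe and declared in extcode.h roughly as follows (check the header shipped with your LabVIEW version for the exact types):

    /* Allocate a block of the given size from LabVIEW's memory manager. */
    UPtr DSNewPtr(size_t size);

    /* Release a block previously allocated with DSNewPtr. */
    MgErr DSDisposePtr(UPtr p);

    You call these through a Call Library Function Node with LabVIEW typed in the path box, as described above.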

  15. But one question remains: Is there a shell of some kind I can connect to?

    No. And in case you start wondering, it's not that there's a hidden one; there isn't one at all (and even if there were, there would be no applications to launch from it, so it wouldn't do anything). On some (maybe all) cRIOs you can plug in a serial cable, enable the serial console output, and watch the boot sequence in a terminal, but that's as close as you'll get. A real-time operating system is much more basic and limited than what you probably think of as an operating system.

  16. Can you post the header file (the function prototypes) for the real functions you want to call?

     

    The prototype for the wrapper around the function that returns the datablock struct by value should be something like:

    void some_other_func_alt(datablock *db);

     

    Then you call that from LabVIEW, preallocating the datablock struct as a cluster and passing it (Adapt to Type) to the wrapper. Inside the wrapper, do a memcpy from the returned struct into the struct that was passed by pointer. While it would be nice to avoid that extra memory copy, I'm not sure there's a way to do so.
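
    A minimal sketch of the wrapper, assuming the original function is declared as datablock some_other_func(void) (take the real name and arguments from your header file):

    #include <string.h>

    /* Original function: returns the struct by value. */
    datablock some_other_func(void);

    /* Wrapper: LabVIEW passes a pointer to a preallocated cluster. */
    void some_other_func_alt(datablock *db)
    {
        datablock tmp = some_other_func();
        memcpy(db, &tmp, sizeof(datablock));  /* or simply: *db = tmp; */
    }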

  17. Edit: Sorry for my brainstorm idea. I forgot that I don't want it polling. I want the blocking feature.

    Blocking and determinism don't really go together. What are you trying to achieve here? Do you want the deterministic loop to wait until it receives a new value (at which point it's no longer deterministic), or do you want that loop to run at a consistent rate regardless of whether there is new data? If the latter, one approach might be a functional global variable with the "Skip Subroutine if Busy" option set (plus a boolean output indicating whether the data is valid, in case the call is skipped).
