
MarkCG

Members
  • Posts

    147
  • Joined

  • Last visited

  • Days Won

    17

Posts posted by MarkCG

  1. I used to think the future of LabVIEW was bright. I thought CompactRIO was amazing and that everyone would want to use it for all kinds of control systems. LabVIEW was just intuitive to me, jobs and well-paying contractor gigs were plentiful, and I hitched the first decade of my career to it. I got to do some interesting things, but decided it wasn't such a smart idea to stay locked to the fortunes of one company whose decisions didn't really make sense to me. Yes, LabVIEW will be around in the same way LISP is. Twenty years from now people will reminisce about visual programming, how advanced it was, and all the things you could do with it that other languages still can't. It's not going to save it. Too niche, too proprietary, too different.

    If you still have a few decades left until retirement, I would absolutely learn Python at a minimum. If you like making machines do things, there is the whole industrial automation world adjacent to LabVIEW and test and measurement, where skills transfer over. I personally spent some time learning Beckhoff's TwinCAT platform and think that's a great entry point.

  2. On 12/8/2020 at 7:48 PM, Maciej Kolosko said:

    If NI would consider unlocking the ability to use NI FPGA to deploy to non-NI hardware ... I think this is where G absolutely would take off like wildfire and be used on millions of IoT devices everywhere in the world powered by an ARM7 + FPGA module... but as it stands now, if you use NI FPGA you must deploy on a target you've purchased from NI.

    I would love to see NI do this. It would open up a huge new field of applications for LabVIEW. I thought NI couldn't do this because of their agreement with Xilinx, which prevents NI from taking over the market share of Xilinx's own programming tools. It would be a real chance for LabVIEW to survive and thrive.

  3. Hi Zofia, we talked before, but I'll add my bit here to see if anyone else has comments. EtherCAT is very powerful, but many of its features are held back by the NI EtherCAT master. For example, EtherCAT can report diagnostics on each individual slave-to-slave link, but NI doesn't expose that. There is no official support for topologies other than line or ring; I have gotten it to work with an EtherCAT junction, but nothing displays correctly in the project. There are no hot-connect groups like in TwinCAT, and configuring slaves with CoE is difficult enough that I use TwinCAT for it. Just the bare minimum is there in the master to get things working.

    I use the Beckhoff EP-series modules to vastly reduce the wiring in my control systems, because I can place the modules next to the things being controlled and run a cable directly to them, often with just off-the-shelf connectorized cables. That beats the traditional way of bringing everything back to the control cabinet with your CompactRIO or PLC in it. I don't think NI is that interested in industrial control and automation, and that's why I'm slowly moving everything toward Beckhoff/TwinCAT. It's just a much better system for what I need to do.
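
    For anyone who wants to poke at CoE outside TwinCAT, it is scriptable too. A rough sketch using the open-source pysoem wrapper around SOEM (a completely separate EtherCAT master, nothing to do with NI's; the 0x8000:06 write is a made-up placeholder, so check your terminal's documentation for the real configuration objects):

    # CoE SDO access sketch via pysoem/SOEM -- assumes a plain NIC
    # wired to the EtherCAT segment. Not NI's master.
    import pysoem

    master = pysoem.Master()
    master.open('eth0')                  # network interface on the segment
    try:
        if master.config_init() <= 0:    # scan the bus, enumerate slaves
            raise RuntimeError('no EtherCAT slaves found')
        terminal = master.slaves[0]
        # 0x1018:01 is the mandatory CoE identity object (vendor ID)
        vendor = int.from_bytes(terminal.sdo_read(0x1018, 1), 'little')
        print(f'slave 0: {terminal.name}, vendor 0x{vendor:08x}')
        # hypothetical configuration write via SDO download
        terminal.sdo_write(0x8000, 0x06, bytes([1]))
    finally:
        master.close()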

  4. I have seen this too -- LabVIEW looking for RT Get CPU Loads.vi in the wrong place and breaking the VI. I'm not sure how LabVIEW gets into that state, but it's been a bug for years. Deleting all the RT utility VIs in your code and then adding them back seemed to fix it.

  5. Thank you Rolf and Tim. This crash seems to have happened exactly once, but maybe with enough logging I'll figure it out. My other option is to convince people to get rid of ZMQ completely for this application and use a plain UDP or TCP connection. The application just sends data to one server anyway, so ZMQ isn't really adding much functionality.
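
    If we do go that route, the replacement is tiny. A minimal sketch with only the Python standard library (host and port invented; the LabVIEW version would just use the native TCP primitives):

    # One length-prefixed TCP stream to a single server -- the only part
    # of ZMQ this application was actually using. Host/port placeholders.
    import socket
    import struct

    def send_message(sock, payload: bytes):
        # length-prefix so the receiver can re-frame the byte stream
        sock.sendall(struct.pack('>I', len(payload)) + payload)

    with socket.create_connection(('192.168.1.10', 5555), timeout=5.0) as s:
        send_message(s, b'{"tag": "pressure", "value": 101.3}')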

  6. Hi all,

    I am running a real-time LabVIEW application that calls into compiled .so file. I am getting occasional crashes of the entire system, and the error log shows a segfault

    #Date: Mon, Jun 29, 2020 04:59:16 PM
    #Desc: LabVIEW caught fatal signal
    18.0.1 - Received SIGSEGV
    Reason: address not mapped to object
    Attempt to reference address: 0x0x4
    #RCS: unspecified
    #OSName: Linux
    #OSVers: 4.9.47-rt37-ni-6.1.0f0
    #OSBuild: 264495
    #AppName: lvrt
    #Version: 18.0.1
    #AppKind: AppLib
    #AppModDate:


    Am I correct in blaming this on code in the compiled .so file (which happens to be a ZeroMQ library)? As far as I know, there really isn't a way for LabVIEW code itself to create a segfault -- you don't have direct access to memory. (See the sketch below for why the .so is the prime suspect.)

    Also posted on the darkside but not expecting any response there.
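
    Anything called through a Call Library Function Node runs inside the LabVIEW process, so its faults are the process's faults, and "address 0x4" is the classic signature of reading a field a few bytes past a NULL pointer. A Python/ctypes analogy (don't run this anywhere you care about -- it hard-crashes the interpreter the same way a buggy .so hard-crashes lvrt, with no catchable exception):

    import ctypes

    # Reads memory at address 4, which is never mapped: the OS delivers
    # SIGSEGV ("address not mapped to object") and the whole process dies.
    ctypes.string_at(0x4)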

  7. 14 hours ago, smithd said:

    This is kind of an interesting concept to me and it's one I've been curious about. Just based on my own anecdotal experience, it seems like the thing holding LabVIEW back is less the UI of the editor and more the combination of "not a real programming language," Python being the thing taught in schools, relatively high costs, limited developers and the general unavailability of experienced LabVIEW devs outside a pretty insular community, and the deployment/runtime situation (LabVIEW doesn't get a free pass like .NET, JavaScript, and in most cases Java).

    I'm genuinely curious if there is data out there that justifies the things NXG is focusing on.

    I remember reading somewhere that the idea behind LabVIEW was to make data acquisition as easy as creating a spreadsheet. That is, anyone with some ability to use a computer and an understanding of basic math could create something that worked for their purposes without being a programmer. Gradually more complexity and capability were needed, and the professional LabVIEW programmer emerged, similar to how professional Excel/VBA programmers arose in the financial industry.

    But like everything, the need for the professional LabVIEW programmer was a product of particular historical circumstances, and I think history has moved on. Since many big companies have invested a lot of money in NI hardware, there will be some demand for LabVIEW programmers to maintain these systems, but it will taper off over the decades. I think it will be like COBOL -- a few crusty old guys wading into ancient codebases to make fixes and small changes, but no greenfield development.

    I may be wrong, but I think NXG is NI's last-gasp attempt to stay relevant, and that it will fail. The attitude in most companies I have experience with is that "not owning your source code" is a major, major problem, and I don't see that changing. LabVIEW is seen as a sometimes-necessary evil only.

    If NI wanted LabVIEW to stay relevant for years to come, they would make it open source and keep selling hardware and software add-ons for it. But they know their business better than I do, and what's good for me isn't necessarily good for them.


  8. 7 minutes ago, paul_cardinale said:

    1. I like to use physical quantities when the values represent physical quantities (such as is common with device drivers).
    2. When the code to run a control starts getting complicated, and the owning VI is already complicated, I like to encapsulate the code that runs the control into an XControl.
    Am I doomed?

    Is there anything like XControls in NXG? I didn't think there was.

  9. Interesting -- a Beckhoff controller with NI hardware is the inverse of what I did. Going forward, I can't see a situation where I would use the NI EtherCAT chassis unless I needed the custom FPGA programming or maybe the special shock/vibration ratings. I was really shocked when I discovered how many more practically useful features the Beckhoff terminals and EtherCAT boxes have compared to the NI C Series modules, and how cost-effective they are. For example, 50/60 Hz bandstop filtering is selectable on all the analog inputs; with C Series you have to implement that yourself in the FPGA or buy a specific module with filtering (the 9209?). Or short/open-circuit detection on the digital output modules. Using the "EtherCAT box" modules eliminated large amounts of control-panel wiring and harnessing, and they're still cheaper on a per-module basis.
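
    For anyone stuck doing it in software instead, the fallback is a plain IIR notch on the host (or ported to the FPGA). A sketch with SciPy, sample rate and Q invented:

    import numpy as np
    from scipy import signal

    fs = 1000.0                                      # sample rate, Hz
    b, a = signal.iirnotch(w0=60.0, Q=30.0, fs=fs)   # 60 Hz bandstop

    t = np.arange(0, 1, 1 / fs)
    raw = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
    clean = signal.filtfilt(b, a, raw)               # zero-phase, offline use
    # for streaming, keep filter state with signal.lfilter(..., zi=...)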

  10. Thanks Rolf. The NI EtherCAT master leaves a lot to be desired, but I've successfully gotten it to work with Beckhoff EtherCAT terminals, even in star topologies. Fortunately I've gotten away from needing to deal with 5 different communication protocols to talk to random devices lately, as I've been able to control what hardware is used. Did you ever develop code in the TwinCAT/Visual Studio environment and deploy it to a Beckhoff industrial PC? How was that?

  11. Hi all,

    I was wondering if anyone here has experience doing machine control with both LabVIEW and TwinCAT (not in the same project or machine). I've been working with Beckhoff hardware and I'm impressed, and I'm curious how the TwinCAT software compares to LabVIEW in ease of use, stability and bugginess, and the power and flexibility of the programming languages available -- that is, how versatile it is compared to a CompactRIO, where it seems you can do pretty much anything. Also, how does it compare to the LabVIEW + LabVIEW RT + LabVIEW FPGA software stack from a cost perspective?


  12. 10 hours ago, viSci said:

    Yes, I have been using it quite a bit on a roadway monitoring project with 50 cRIOs. I use it to handle all of the cRIO system state and tag data publishing. We are still in the early stages of testing, but it seems to be working very well. The LabVIEW version of RTI DDS is a subset of its full capability. RTI has been slowly adding features, but it still does not support basic things like events. Judging from the forum posts, the toolkit is largely unknown in the LV community. I think if more people adopted it, it would garner more love and attention from RTI.

    What kind of data rates and how many tags can it handle?

  13. On 12/26/2019 at 10:29 AM, smithd said:

    I'm confused by this question -- there is an available ZeroMQ library if that fits your needs. It's not perfect, but it's good, and it's up to version 4.2.5 (the current main lib release is 4.3.2).

    If you don't need linuxrt support you can just use https://github.com/zeromq/netmq

    We got it to work on cRIO:

    https://sourceforge.net/p/labview-zmq/discussion/general/thread/87780372ed/

  14. On 10/15/2019 at 9:58 AM, Aristos Queue said:

    Y'all sometimes ask me, "AQ, why are you so eager for LabVIEW NXG?" Language extension is the biggest answer. The code layers of NXG are far more amenable to language extensions. I've got an 80-slide PPTX on actors as first-class citizens: no queue management, no classes for messages but retaining type safety, no weird error codes, easy handling of parallel loops without custom stop signals, direct execution testing, debug monitoring... but I just don't see it happening in LabVIEW. The compiler simply isn't sophisticated enough to take a high-level diagram and generate the code implied by it -- not enough separation between source model and execution model. NXG retains the existing execution model and builds a separate source model, which means the transforms possible in the compiler are waaaay more extensive.

    And mine is not the only proposal on the table. I work with a team full of people who have alternate diagrams that bring the actor model more directly in line with data flow, or introduce various other types of dataflow. Did you see the Multi-Rate Diagram that was in LabVIEW Comms when it released? Sadly discontinued for various reasons, but it provided a totally different set of rules for how an upstream node triggers a downstream node. Likewise, Rebar is a different model of computation that can compile in the NXG compiler.    You've seen what we can do with channel wires already in LabVIEW... we can go sooooooooo much further in LabVIEW NXG.

    PS: I was torn in the poll... "Do I choose 'Actor Framework' or do I choose 'my own framework'?" 🙂

    I'm interested in learning more about it -- can we see the presentation? I have never installed NXG, but this sort of thing is what would sell me on it. My superficial impression has been that, at this point, it's LabVIEW with vector graphics.

  15. On 9/26/2019 at 9:24 AM, G-CODE said:

    Tell me about it! I think I've written seven of them so far and every time I create a new one I have to reference my previous modules to help me get the new one working. It's not trivial.

    For that reason I moved complex logic outside of DCAF to a higher level. DCAF manages I/O like the Scan Engine, serial/Modbus instruments, and calculated tags. I wrapped DCAF in a subsystem that all other subsystems must use to access I/O. Since DCAF exposes I/O data through NI's CVT, I make sure not to use the CVT elsewhere throughout my application, as that would bypass the protections I put in place for writing to tags.

    Makes sense (the gatekeeper idea is sketched at the end of this post). To me DCAF seems so close to doing everything I want it to do that I'd hate to add yet another layer on top of it.

    Honestly, my feeling is that all these frameworks are great, but they all use a lot of complexity to stretch LabVIEW into something completely different from what it was originally meant to do. There has to be a better way, where actors, state machines, the distinction between tag, command, and streaming data, and easily handling thousands of I/Os are fundamental concepts of the language.
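
    A rough sketch of that gatekeeper pattern, with everything invented for illustration (the dict stands in for the CVT tag table):

    class IOGatekeeper:
        """Single owner of the tag table; all subsystems go through it."""
        def __init__(self):
            self._tags = {}      # stands in for DCAF's CVT
            self._owners = {}    # tag -> subsystem allowed to write it

        def claim(self, tag, subsystem):
            if self._owners.setdefault(tag, subsystem) != subsystem:
                raise PermissionError(f'{tag} owned by {self._owners[tag]}')

        def write(self, tag, value, subsystem):
            if self._owners.get(tag) != subsystem:
                raise PermissionError(f'{subsystem} may not write {tag}')
            self._tags[tag] = value

        def read(self, tag):
            return self._tags[tag]

    gk = IOGatekeeper()
    gk.claim('valve_1_cmd', 'purge_ctl')
    gk.write('valve_1_cmd', True, 'purge_ctl')   # ok
    # gk.write('valve_1_cmd', False, 'hmi')      # raises PermissionError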

  16. 5 hours ago, smithd said:

    Yeah, I made an aborted attempt (https://github.com/LabVIEW-DCAF/ModuleInterface/tree/StreamingAndEvents  and   https://github.com/LabVIEW-DCAF/ExecutionInterface/tree/StreamsAndEvents) but... then I left NI. The nature of LabVIEW merging is such that those branches are probably now useless :(


    That's great, I had no idea! I'll take a look at it at least and see if I can maybe come up with something similar.

  17. I use DCAF for most of the things running on CompactRIO. I like it: it's very easy to write static modules, and much harder to write dynamic ones that work correctly with the configuration editor, though I have done it. It's nice to be able to manage tags with CSV files. My biggest gripe is that there are no built-in events, messages, or triggers. So if you have a module that needs to perform some sequence initiated on user command, you have to do something hacky like increment an integer tag or toggle a Boolean tag's state (sketched below). I guess you could also do it with RT FIFOs or queues, sending that kind of command data from outside or even inside the framework, but I haven't invested any time in that, and it seems like it would be hard to debug.
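
    The counter-tag hack boils down to edge detection on an integer, something like this sketch (tag names and the dict are stand-ins for DCAF's tag table):

    tags = {'start_purge_cmd': 0}    # incremented by whoever issues the command
    last_seen = 0                    # module-private state

    def module_scan():
        """One DCAF engine iteration, drastically simplified."""
        global last_seen
        cmd = tags['start_purge_cmd']
        if cmd != last_seen:         # "edge" on the counter = one trigger
            last_seen = cmd
            start_purge_sequence()

    def start_purge_sequence():
        print('purge sequence triggered')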

  18. On 6/5/2019 at 8:06 PM, smithd said:

    It's similar to flattening a cluster, except it's cross-language. It accomplishes this with scripts that take a message definition and generate code in that language. This makes it easy to send a protobuf message, which might be represented in LabVIEW as a cluster*, to C or Java or Python or Go or wherever. Its primary benefit over something like JSON is a slightly more extensive type system and speed.

    This won't get you any of what I described but if you just need super basic support for generating a valid message manually: https://github.com/smithed/experiments/tree/master/Protocol Buffers

    It would need a ton of work to actually support scripting. It doesn't seem like there is enough of an advantage.


    *I'm pretty sure it always has to be a class actually due to things like optional data values.


    Daniel, thank you so much for sharing this and bringing it to my attention. So what someone would have to do is create a program that reads in a protobuf specification text file and then uses VI scripting to create a VI that encodes a specific message, using the VIs in the "protocol buffer encoder" library. It seems like the sort of problem you can break down recursively: once the protobuf specification is parsed into a tree, you start at the root and recurse down until you reach a leaf node, generate code for all the leaves, and then wire the leaf code together as you recurse back up (see the sketch at the end of this post).

    Easier said than done though...
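
    The encode step itself is small once the recursion is in place. A hand-rolled sketch (the two-field schema at the bottom is hypothetical; a real generator would walk the parsed .proto tree instead of a hand-built list):

    def varint(n: int) -> bytes:
        # protobuf base-128 varint: 7 bits per byte, MSB = continuation
        out = bytearray()
        while True:
            b = n & 0x7F
            n >>= 7
            out.append(b | (0x80 if n else 0))
            if not n:
                return bytes(out)

    def encode(fields) -> bytes:
        # fields: list of (field_number, value); value is an int, a str,
        # or a nested list of fields -- recurse on the way down, then
        # length-prefix the nested bytes on the way back up
        out = bytearray()
        for num, value in fields:
            if isinstance(value, int):              # wire type 0: varint
                out += varint(num << 3 | 0) + varint(value)
            else:
                data = value.encode() if isinstance(value, str) \
                       else encode(value)           # wire type 2: bytes
                out += varint(num << 3 | 2) + varint(len(data)) + data
        return bytes(out)

    # message Reading { int32 id = 1; Unit unit = 2; }   (hypothetical)
    print(encode([(1, 150), (2, [(1, 'degC')])]).hex())  # 08 96 01 12 06 ...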


  19. How are protobufs all that different from flattening a LabVIEW cluster? It seems like I may be implementing LabVIEW protobufs at my job. I keep checking this thread hoping someone will have done it already, because it really has very little appeal to me as a project, but I may have to.
