Everything posted by ShaunR

  1. Dear NI

     It is presented as G code... when loaded into LabVIEW. All that is being proposed is for the on-disk representation to be in a format that normal SCC can deal with, rather than a proprietary binary format that prevents us from incremental differencing. Projects et al. have already gone this route. The suggestion is to extend it to the VIs themselves. We know that VIs *can* be represented in forms such as XML from some of the under-the-hood tools we have seen. The current state of affairs in how SCC tools deal with LabVIEW source is the equivalent of using SCC for C/C++ object code, and the NI graphical solution is far too manually intensive (see the sketch below). As for the rest of Santa's list: most of it is "gibs free stuff".
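
     As a rough sketch of the diffing point - the XML below is purely hypothetical and not any real NI on-disk format - once a VI is stored as text, ordinary line-based differencing of the kind every SCC tool already does falls out for free:

```python
# Hypothetical text representation of a VI before and after an edit.
import difflib

vi_before = """<VI name="Read Sensor.vi">
  <BlockDiagram>
    <Node type="Add" id="1"/>
  </BlockDiagram>
</VI>""".splitlines()

vi_after = """<VI name="Read Sensor.vi">
  <BlockDiagram>
    <Node type="Add" id="1"/>
    <Node type="Multiply" id="2"/>
  </BlockDiagram>
</VI>""".splitlines()

# Print only the changed lines, exactly as git/svn would show them.
for line in difflib.unified_diff(vi_before, vi_after, "before", "after", lineterm=""):
    print(line)
```
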
  2. Agreed. I was just staring at the MDI toolkit. If I can get the diagram window references I can probably create a region that contains multiple VI FPs and their diagrams, along with things like tile, minimise etc. I might have to hack it using the Subpanel's ability to show diagrams, but it's maybe worth having a look at for giggles.
  3. Interesting... I don't like the single-VI view. I like to view multiple VI diagrams and panels when debugging and editing. I also don't like the menubar only reflecting the currently selected item, à la Mac - it trips me up all the time. However. The toolbars are another matter. You can kind of do the above by docking palettes and the Project Explorer to the desktop sides (which is what I do). What you can't do is dock to the top and bottom of the desktop, or dock the context help, or dock windows to each other. I really like the way Codetyphon/Lazarus works for toolbars etc. Each is a separate window (like LabVIEW toolbars) but you can dock and nest them. So. Out of the box you have something like: You can grab the yellow bar below the title bar and dock it to other windows (the blue shaded area in the source view shows you where it plans to put it, for example). You can do this with any window (there are a lot of them) to create split bars and even tabbed, split bars. Ultimately. You end up with my preferred layout, which is: It's only the tabbed page view in the middle, which makes switching back and forth between FP design and source a pain, but the way it handles the ability to make your own IDE layout is great.
  4. I was once told by an engineer (I think he might have been a Senior Engineer) that it was OK and even desirable to allow broken trunks in source control. He read it in a book, apparently. Glad he wasn't on my project.
  5. Actually. Yes. Elitist zealotry deserves mocking. Because it is not what is being tested in a CLA exam: how a CLA achieves that is not dependent on OOP.
  6. I've never read such a load of clap-trap on this forum.
  7. You can kind of do that with VI Server, Wireshark and a lot of determination. It's not very secure and not at all documented, though. Now. If we could connect to VI Server with wss and ... <thoughts for another time>
  8. Deal. I would also have accepted Streams, .NET, Citadel and a few others.
  9. If they removed classes I wouldn't be too upset. Good trade-off.
  10. Well. I settled on string messages with a syntax similar to SCPI. The issue then becomes how to disassemble the strings and turn them back into the types. When I demo'd the above concept using "Named Events" (anyone remember the HAL Demo?) I had to use a split-string VI with case statements in the Event Structure to do this. That could be alleviated by LabVIEW's in-built cluster mechanics in the Event Structure, but then we move away from strings to clusters. So whilst strings keep things interoperable, clusters would bring us back to the lack of interoperability that you describe. I've maintained for a long time that strings are the ultimate variant, and they can be transmitted across networks and/or to other languages to boot. In this paradigm I chose string formats of the form SENDER>TARGET>OPERATION>PAYLOAD (e.g. "MAIN>SOUND>PLAY>FILE>operatio.wav"). The first two elements are the source and destination - more specifically the sender's VI name and the intended recipient's VI name. That could be hidden and removed with the proposed event implementation above, since LabVIEW would have that knowledge and it is used purely for filtering the messages (events are one-to-many). A rough sketch of how such a message gets picked apart follows below. It also meant that I moved away from QSMs to EDSMs (Event-Driven State Machines) and "Services" and "Listeners". Services provided application-wide functions and access to resources that were typically singletons, such as comms, logging, database access, sounds etc., while Listeners simply observed the messages and did their own thing, often utilising the Services. So I guess you could consider Services and Listeners as sub-classes of the generic "Actor" term. The result is that you end up with standard Services with no static linking to the application - "plugin-able", if that's a word - which could be imported into other projects just by placing the icon on the main diagram, or loaded dynamically (dependencies excepted, of course). Once loaded, the developer just had to send a Service messages to use it and subscribe to messages to get info and feedback. They could also be tested and used in isolation, which makes LabVIEW life so much easier. As an example, you can clearly see the Services (along the top) in the following application that I started a while back (and may pick up again with the news that NXG is shelved). The Listeners are the pages in the Tab control.
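
     As a rough sketch of that message format (in Python rather than G, and purely illustrative - the handler names are made up), the string is split on only the first three '>' delimiters so the payload can itself contain '>':

```python
# Pick apart a SENDER>TARGET>OPERATION>PAYLOAD message and dispatch on the
# operation. The payload may contain further '>' (e.g. "FILE>operatio.wav"),
# so split at most three times.
def parse_message(message: str):
    sender, target, operation, payload = message.split(">", 3)
    return sender, target, operation, payload

def handle(message: str, my_name: str = "SOUND"):
    sender, target, operation, payload = parse_message(message)
    if target != my_name:          # events are one-to-many; filter on the target
        return
    if operation == "PLAY":
        print(f"{my_name}: playing {payload} (requested by {sender})")
    else:
        print(f"{my_name}: unknown operation {operation!r}")

handle("MAIN>SOUND>PLAY>FILE>operatio.wav")
```
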
  11. Thinking a bit more about this: we already have an editor where we could add the messages, rather than usurping the dynamic terminal. After all, it's really just pulling in and simplifying dynamic events so that no primitives are needed, and adding a viewer for the message paths. The only outstanding question is: what do we do about generating the events to remove the last primitive (Generate User Event)? The more I look at this, the easier it becomes to implement.
  12. It's a good idea but it's an awful implementation. You can smell that it's awful from the amount of infrastructure replication from diagram to diagram, so eventually it all looks the same with minuscule differences. That's *not* reuse! Scripting isn't the answer here. A better approach would be similar to the "Named Events" that I once played with. Of course, I don't have the ability to draw wires between the producers and subscribers in there that Jeff can do, but the Event Structure becomes the housing for your code, populated with the appropriate events and data terminals dependent on your message - either in another loop on the diagram or in a completely separate VI (which we'll call an Actor). So. With the above in mind, say we didn't even want to have the VIM node (which is really just defining the message or, more practically, the user events and data terminals that appear in the Event Structure) and we could just pop up on the Event Dynamic Registration terminal and define our message. That could then be available throughout all our code (we can argue about scope later). So, like Jeff, 1 is the "expanded" code and 2 is the iconised abstraction. Now. Of course. Jeff uses wires, and we could use wires here too to represent the message paths between the loops and icons if we were NI devs. But if we are going to use another interface, let's just use it to view the message paths. Rather than straitjacketing the developer into the alternate view and lots of [slowly scripted] boilerplate code, we are viewing the actual programmed code and can debug the messages (the blue dot is the probe and the probe contents are on the right). All the complexity has been hidden and simplified - which is what we want. Now we have a system that is in the same spirit as the hierarchy and class windows, with very little boilerplate code. Additionally, "actors" can be launched dynamically and we can still debug them. The LabVIEW developer only needs to lay down a loop with an event structure and pop up on the Dynamic Registration terminal to create or subscribe to messages. When subscribing, the events and the data types are propagated into the structure just like normal user events. We could even use the Dynamic Registration on the right side to send messages back to the producer or other subscribers. If the latter were the case, then the producer in the examples shown could simply be a value wired to the right-hand-side Dynamic Terminal of an event case (in the timeout case, with 10 ms wired) instead of the Generate User Event shown. A rough sketch of the general named-event idea is below.
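
     As a rough sketch of that named-event idea (again in Python rather than G, and not the proposed editor feature itself): a message is defined once by name, any number of listeners subscribe to it, and publishers fire it without knowing who is listening:

```python
# A message is registered by name; listeners subscribe to it and publishers
# fire it one-to-many. In LabVIEW terms each callback would be an Event
# Structure case; here they are plain functions.
from collections import defaultdict
from typing import Any, Callable

class NamedEvents:
    def __init__(self):
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, name: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[name].append(callback)

    def publish(self, name: str, payload: Any = None) -> None:
        # Every subscriber to this named event gets the payload; the publisher
        # never sees who is listening.
        for callback in self._subscribers[name]:
            callback(payload)

events = NamedEvents()
events.subscribe("PLAY", lambda payload: print("Sound service got:", payload))
events.subscribe("PLAY", lambda payload: print("Logger saw:", payload))
events.publish("PLAY", "operatio.wav")
```
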
  13. Unicode, as I outlined earlier, is the only thing in that list that needs to be prioritised. Comparing two VIs in LabVIEW becomes moot if they switch to XML (as I also outlined). .NET can go hang - it's not cross-platform - and we can already pass NULL pointers; it's just not a LabVIEW type.
  14. Oh, that reminds me. I really should clean up this XControl (the toolbars). I've also got a Tab Control one too, if anyone is interested.
  15. A decade is about how long things need to percolate through the learning institutions to reach a critical mass of employees skilled in it. Relatively speaking - yes, it is "whiz-bang" when compared to Javascript, C++ et al. Even Rust is 10 years old. C# is 20. We really haven't moved on much from about 30-40 years ago, when there was a truly spectacular explosion of ground-breaking invention and innovation in software and hardware by some very special people... and one of those was LabVIEW. We've tweaked, revised and added spokes to the existing wheels on the back of huge leaps forward in hardware technology that have hidden the industry's mediocrity.
  16. You don't need a PhD thesis. Some people thought they could do <INSERT PROJECT HERE> better with the latest whiz-bang stuff and convinced management they could do it. I've seen it a hundred times.
  17. Not really general-purpose (yet?), but you mentioned IoT, so Node-RED is worth looking at for the not-too-distant future.
  18. Considering the distribution issues with Linux variants - and often within the same variant - where even things compiled on a different machine often won't work unless everything is "just so", that was at best a pipe dream. It's the very reason I stopped supporting Linux for ECL. The only safe way to build executables is on the target Linux variant - preferably on the same machine - and for that you would need the full LabVIEW. It's one of the main reasons why interpreted languages are so prevalent on Linux platforms (PHP, Python, Javascript etc.).
  19. NXG had an inherent fatal flaw: it wasn't (and would never be) cross-platform. That was exacerbated by the slow progress of development, to the point that NI lost the T&M market to Python. It was a good decision, rarely seen by large multinationals. The good news is that they figured out how to represent VIs as XML properly and how to convert the VI format to it. We have already seen that format creeping into LabVIEW in the form of projects and libraries, but the VIs themselves remained proprietary. Porting the ability to represent VIs in XML would be a huge improvement to LabVIEW, enabling source control tools to work properly - the bane of LabVIEW for over a decade. The other much-needed LabVIEW improvement is internationalisation. We have had UTF8 for a long time but have been unable to display it. Instead we had a *hack* that never worked right and never would with the resources allocated to NXG. Trying to display it has been fraught, and other means were found. If the lessons learnt from NXG mean we can display UTF8, that would be another huge improvement to LabVIEW and, I think, easier than UTF-16/32 without impacting current string manipulation. I'm ambivalent about the IDE. It works well enough for me to develop with, but it is a poor user interface for applications. I got around this user interface problem a long time ago with Websockets so, although a bit more effort, it's not a big issue anymore (a rough sketch of that approach is below). I hated the restrictive framing of NXG, and it seemed to promote huge monolithic diagrams with the zoom facility. The standard LabVIEW IDE encouraged me to make small, encapsulated code modules, and I could see many parts of the architecture at once when debugging. If shelving NXG means the first two issues get addressed, by either more effort or porting, those alone would mean, IMO, that LabVIEW has a future - a bright future. It would once again be a cross-platform equivalent to all the other text languages, with all the advantages *over* the other languages that it originally had.
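
     As a rough sketch of that Websocket approach (using the third-party Python 'websockets' package; the channel name and data are purely illustrative), the back end simply pushes readings to a browser page instead of driving a native front panel:

```python
# Push (simulated) readings to a browser-based UI over a websocket; a real
# back end would forward actual acquisition data here.
import asyncio
import json
import random

import websockets  # pip install websockets

async def push_readings(websocket, path=None):
    # Send a new reading once a second as JSON for the browser page to render.
    while True:
        await websocket.send(json.dumps({"channel": "ai0", "value": random.random()}))
        await asyncio.sleep(1.0)

async def main():
    async with websockets.serve(push_readings, "localhost", 8765):
        await asyncio.Future()  # serve until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```
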
  20. AQ leaves, NXG gets shelved. Insert your rumour-mongering and conspiracy theories here.
  21. You have obviously never done Agile Development properly then, since it is an iterative process which starts with the design step just after requirements acquisition. It's not a fear of failure; it is a fast-track route to failure, which usually ends up with the software growing like a furry mould. But anyway. It's your baby. You know best. Good luck :)