Everything posted by ShaunR

  1. This is the main reason ActiveX and .NET are banned from my projects. HTML is the output of choice, currently. You can even use simple string replace on keywords in report templates for most things. You don't have to be a web developer, but if you can palm it off, erm, I mean, outsource it to IT, then that's a bonus. It also means that later, with a bit of JavaScript, you can make them into "live" reports and interfaces.
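The keyword-replacement approach above can be sketched in a few lines. This is a minimal illustration, not the author's actual tooling; the template and placeholder names ({TITLE}, {SERIAL}, {RESULT}) are hypothetical.

```python
# Minimal sketch of keyword replacement in an HTML report template.
template = """
<html><body>
  <h1>{TITLE}</h1>
  <p>Serial: {SERIAL}</p>
  <p>Result: {RESULT}</p>
</body></html>
"""

def fill_report(template, values):
    # Plain string replacement on keywords -- no web framework required.
    for key, value in values.items():
        template = template.replace("{" + key + "}", str(value))
    return template

report = fill_report(template, {"TITLE": "Test Report",
                                "SERIAL": "SN-0042",
                                "RESULT": "PASS"})
```

The same template could later be made "live" by having JavaScript in the page refresh the values instead of baking them in.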
  2. Thank you. I guess I'm learning. How's this for abstract? OOP can superficially describe "things" in the real world but is atrocious at modelling how "things" change over time. Damn. Now my "Fruit" base class needs a "has worm" property.
  3. OOP obfuscates and makes code less readable. If you use dynamic dispatch, you even have to go through a dialogue and guess which implementation is called. Abstraction does not make code more readable; it hides code and program flow. It may seem more readable than what you had originally, but that is a relative measure and you know your code intimately. Tenuous at best, relying on an assumption that with your white-box knowledge of the code and object-oriented expertise you were able to identify optimisations. Optimisation is a function of familiarity with the code and recognising patterns. It is not the paradigm that dictates this; it is experience. Again, tenuous, claiming that abstraction enabled optimisation. See my previous comment. Code reuse is a common OOP sales pitch that has been proven to be false. It is LabVIEW that is cross-platform, so of course your code is cross-platform. Again, this has been proven to be incorrect. The usual claim is that it is slower to begin with but gains are realised later in the project, so overall it is faster. Project gains are dictated more by discipline and early testing than by paradigm. One-week agile sprints seem to be the currently accepted optimum. Another sales pitch. See my previous comment.
  4. Then I cannot rewrite it in Classical LabVIEW and it is just an argument of "my dad is bigger than your dad". All my arguments are already detailed in other threads on here (which you refused to let me reference last time). You think it's great and I think not so much. I outline real implications of using LVPOOP (code size, compile times et al.) and you outline subjective and unquantifiable measures like "elegance". There is nothing that can't be written in any language using any philosophy. The idea that a problem can only be solved with OOP is false. It boils down to the efficacy of achieving the requirements, and OOP is rarely, if at all, the answer. After 30 years of hearing the sales pitch, I expect better.
  5. So. Pictures are now code? I would forgive a newbie for that but, come on! FWIW, the Classical LabVIEW equivalent of dynamic dispatch is a case statement, and at the top level it would probably look identical to the first if it was contained in a sub-VI. Apart from that... very pretty, and don't forget to edit the wire colours for that added clarity. Even if the caller has functions with different terminals?
  6. It's a small and niche sector. You have students being taught by students, and the experienced ones have either been moved on to management or are a key person in dead man's shoes. I, along with you, am jaded. When you have seminars and talks consisting of nothing more than someone relaying their bumbling through to an epiphany, you know there is a dearth of experience. There are lots of architects and students and very little in between, and those architects only want to do the design, not the coding. The good news for the OP is that because it is a niche market, specific knowledge gives way to "potential" knowledge, and an unsaturated market opens more opportunities with a lower bar to entry.
  7. Generally it is LabVIEW's implementation of OOP: the poor compile times, the complexity, the maintainability, the ballooning of the code base and the bugs. Classical LabVIEW is easy and arguably produces more robust code that is easy to understand for engineers rather than CS academics. I often talk about "pure" and "applied" programmers (an analogue to pure and applied mathematics), and Classical LabVIEW is great for applied programmers. OOP is unnecessary complexity in all but the most fringe use cases, and it has sucked up all the development resource of the language for features that could have benefitted how the vast majority of production code, that does real things, is written. But no. Interfacing with the Windows subsystems that I'm used to never involves objects. It uses functions in dynamic libraries that take data arguments. Opaque pointers to objects are the quickest way to a GPF, and in LabVIEW that means taking out the IDE too. It is only when you get to .NET that you are forced to start interfacing with objects, and I think you know how unimpressed I am with that - it's banned from my projects. If I want to use .NET I would use C#, not LabVIEW. One advantage of being a polyglot, so to speak, is that I'm not limited to one programming language and can choose the best tool for the job.
  8. Without my successfully conveying the fundamental difference between LabVIEW and, say, C[++] or Pascal and the many other procedural languages that OOP was proffered as a solution for, you should perhaps put my comments to one side while you fill out the feature set of the API. I will leave you with this, though. Why isn't a VI an object?
  9. It doesn't support it at all, because messaging is a method of breaking dataflow and, if I am feeling generous, it is an equivalent of dataflow with one input to satisfy. The idea that dataflow is "data flowing" - moving from one place to another - is a simplification used to teach the concepts. In fact, it is about "state". What defines a dataflow language is that program execution continues when all inputs are satisfied. Execution state is manhandled in other languages and concepts by ordering function calls (procedural) or unwinding a call stack (functional), and this remains their main problem today. This is why we say that dataflow languages have implicit, rather than explicit, state. Specifically, "execution state" is implicit rather than "system state". From this perspective, you have broken dataflow for excellent reasons and are proposing to add it back in with added complexity so that it "looks" like dataflow again - a problem of your own creation, like so many other mainstream, non-dataflow concepts when applied to LabVIEW. The solution will be a service, actor or whatever you want to call it, that has visibility of global execution state. In Classical LabVIEW we would just call a VI as non-reentrant from the three loops and allow the implicit nature to take care of ordering and progress of the loops. However, I understand the desire for "completeness" of your API and that's fine. Futures are a fix for yet another self-inflicted problem of OOP dogma, so I don't agree that there are no OOP concepts involved. In LabVIEW, futures are an architectural consideration, not one of implementation.
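The dataflow rule described above - a node executes only once all of its inputs are satisfied, regardless of the order they arrive in - can be sketched as a toy. This is an illustrative model only, not how LabVIEW's scheduler is actually implemented; the `Node` class and its names are invented for the example.

```python
# Toy illustration of the dataflow firing rule: a node runs when
# its last input arrives, whatever order the inputs were supplied in.
class Node:
    def __init__(self, func, n_inputs):
        self.func = func
        self.n_inputs = n_inputs
        self.inputs = {}

    def supply(self, name, value):
        self.inputs[name] = value
        # Implicit execution state: readiness is simply "all inputs present".
        if len(self.inputs) == self.n_inputs:
            return self.func(**self.inputs)
        return None  # not ready yet

add = Node(lambda a, b: a + b, 2)
first = add.supply("a", 1)   # only one input satisfied -> does not fire
result = add.supply("b", 2)  # both inputs satisfied -> node fires
```

Nothing here orders the calls explicitly; execution "happens" when the data is complete, which is the sense in which execution state is implicit.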
  10. If you use Yocto then one of the formats for output is a VM image, aside from the more usual *.img and *.iso. The NI repository has a Yocto recipe for building NI-RT. After 3 days of fighting with Python and compiler toolchains, I popped my cherry with a Raspberry Pi and a custom image (which are included in Yocto and all worked fine). Onwards and upwards to NI-RT. I got to the stage of finding out that everything turned to crud because Yocto isn't backwards compatible (I had 2.2 and NI use 1.8), so I rage-quit at that point and posted on here after seeing Daklu's comment. If you really want to be depressed, follow the link in the readme.md of meta-nilrt for "Building the Layer", where all your questions will be answered.
  11. Ahh. But I am, all the way. I need enough to merely debug SQLite, OpenSSH and OpenSSL on some semblance of the NI-RT target. I care not for DAQ, VISA, Vision or anything else. Since this is for internal debug, I don't really have any open source obligations either, unless I distribute the VM. The aforesaid binaries I use have already been researched and I comply with the licences for the end products. The new plethora of licences on the RT target will cause me to research each individually for compliance. The risk of overlooking a clause that predatory licence practices rub their hands with glee at means I would probably not bother if it takes more than some arbitrary number of hours or days. That effort would also have to be replicated every time there is a new release, and I would probably end up spending time writing tools and systems to catch changes. The Linux community seem happy with this arrangement (even though Linus Torvalds isn't). I would therefore rather just not use it, and have not used it prior to Windows 10 for these exact same reasons, to the extent that I withdrew Linux support even though most products work with it. Thus I will be prevented from sharing the fruits of that [VM] labour before it has even begun. That is the tragedy of the commons of open source. Of course, I could just install the old-ass version of Yocto and decimate my version 2.2, which took 3 days of pursuing rabbits down very deep holes to set up and get working. That wouldn't help me prevent others from the same pain of getting hold of a VM, though.
  12. Good idea to move. Thanks. Maybe move the other responses here too? I really don't want to pollute the other thread. The problem is that if you do manage to get an image from a device, it probably won't be compatible with a VM. It may be OK to put on another identical platform, though. When you compile Linux stuff, there is a whole toolchain dedicated to figuring out what system you are compiling on and then setting a shedload of compiler switches and includes (which never work out-of-the-box, in my experience). Cross-compiling is even worse. Linux is the worst platform to maintain. Of course, NI could supply us with one; then licencing, obnoxious build tools et al. wouldn't be a problem and we could get on with the proper programming instead of sorting out 1980s build systems. As far as licencing is concerned, Linux doesn't make it easy. The onus is on us to identify all the different licences and then comply. Since distros are an amalgam of software written by a multitude of crazy militants, each with different ideas about who can and can't use their software, it is a minefield. The Linux kernel on its own is a known quantity (GPL), and if they have produced a custom kernel, that is sticky. But who knows what NI libraries and support binaries from 3rd parties have been added (well, NI do).
  13. Why would you want a dataflow construct when the language supports it implicitly, then? Unless it is to fix the breaking of that dataflow because of the LVPOOP ideology. However, if you are trying to make a distinction between OOP and OOD, then I am in agreement, since OOP is not required for the latter.
  14. You haven't noticed? It probably has something to do with being one of the 5%.
  15. Have you built a VM with the NI-RT Linux? I had a go but they are using an old-ass version of Yocto.
  16. It doesn't matter. If it's for communication then prepare the listener for your definition (whatever that may be), then use that and move on to important things about the code - just be consistent. I get fed up being asked to ponder philosophical significances in OOP.
  17. Be aware when benchmarking globals that their access times are heavily dependent on how many instances there are and whether you are reading and/or writing. They deteriorate rapidly as contention for the resource and copies of the data increase.
  18. Place the polymorphic VI on the diagram and click the label at the bottom. A menu will appear listing all the functions, from which you can choose one. Copy and paste the VI (or drag with the mouse+CTRL to create a copy) and choose another function as before.
  19. You don't need to defend yourself on some arbitrary forum. The relationship (or lack thereof) between yourself and NI is between you and NI alone, and it is up to NI whether they want to challenge or defend their IP. Questioning a company's integrity in public is highly unprofessional and you do not need to respond.
  20. I have several use cases in mind, the usual ones being software distribution and distributed databases/file systems. A few others too that are closer to what you are describing, but they are more a case of "it could be used but there are probably better ways" - I would know more later. Service discovery is a means to an end for DHTs if you consider supplying a key-value pair a service. The difference between Kademlia and Chord is basically how they search for and contact providers of specific key-value data, with an expectation that someone will supply it but without caring who. If one of each service is expected in a system then I'm not sure what would be gained, and there would certainly be much faster ways; but if you wanted to spread configuration data amongst all services for fail-over (effectively a distributed database) then maybe. Off the top of my head, you could probably use the routing table from a DHT in some way, but it's a big "depends".
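The Kademlia side of the comparison above hinges on its distance metric: node and key IDs share one ID space, and "closeness" is the XOR of two IDs interpreted as an integer, so the nodes responsible for a key are simply the k nodes with the smallest XOR distance to it. A minimal sketch, with invented node names (Chord instead measures numeric distance around a ring):

```python
# Sketch of Kademlia's XOR distance metric and "closest nodes" lookup.
import hashlib

def node_id(name: str) -> int:
    # 160-bit IDs, as in the original Kademlia design (SHA-1 sized).
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    # XOR metric: symmetric, zero iff a == b, obeys the triangle inequality.
    return a ^ b

def closest_nodes(key: int, nodes: dict, k: int = 2):
    # The k nodes "responsible" for storing this key-value pair.
    return sorted(nodes, key=lambda n: xor_distance(key, nodes[n]))[:k]

nodes = {name: node_id(name) for name in ("alpha", "beta", "gamma", "delta")}
key = node_id("some-config-item")
closest = closest_nodes(key, nodes)
```

In the fail-over scenario above, each configuration item would land on the k closest nodes, so losing one node loses no data.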
  21. Has anyone worked with, implemented or investigated DHTs in LabVIEW? I'm particularly interested in Kademlia/S/Kademlia and Chord so would appreciate any input specifically about those, but any other DHTs that have been played with would be great for discussion.
  22. I'm not sure what you mean by "entirely offline". JavaScript libraries can be loaded in any browser off the local file system, so online servers are just the delivery mechanism for the JavaScript code. Dynamically updating the variables in that JavaScript has a number of options, from WebSockets and WebRTC to the local NI web server and so on. If push came to shove, you could use the .NET Internet Explorer browser control on the front panel and pretend it is just a graph control.