ShaunR
Members
Posts: 4,882 | Days Won: 296

Everything posted by ShaunR

  1. Only if you compile it to be so, then perform back-flips in your code.
  2. Nice write-up. I was going to write some examples but for the life of me I couldn't think of one real-world problem that it solves. I keep looking at those functions and coming back to this every couple of years in case I've missed something, but every time I get stumped by the per-node-instance nature and being unable to pass a parameter into it. Most modern APIs use opaque objects/structures, and it is these we need to clean up, rather than the function call instance. I guess it is meant for managing thread safety, but we are concerned with a purely IDE event so that we can unload a resource as the final operation. It is a design-time problem alone. The classic requirement is to prevent error 5 when aborting a SQLite query, which otherwise requires a restart of LabVIEW to close the handle. I can do this by installing a "monitor" into the IDE, but it's an awful solution. I can't think of any way to utilise these features for that use case without an intermediary - you can't even reference count the [object].
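The intermediary mentioned above, the one the API itself doesn't provide, can be sketched as a reference-counting wrapper in the calling layer. This is a minimal illustration in Python, not any NI API; the names `RefCountedHandle` and `close_fn` are hypothetical. The idea is that the real cleanup of an opaque handle runs exactly once, when the last holder releases it:

```python
class RefCountedHandle:
    """Hypothetical intermediary that reference-counts an opaque handle.

    The underlying close function runs exactly once, when the last
    holder releases the handle (e.g. a DB handle that must not be
    closed while another caller is still using it).
    """

    def __init__(self, handle, close_fn):
        self._handle = handle
        self._close_fn = close_fn
        self._count = 1        # the creator holds the first reference
        self._closed = False

    def acquire(self):
        """Take an additional reference and return the raw handle."""
        if self._closed:
            raise RuntimeError("handle already closed")
        self._count += 1
        return self._handle

    def release(self):
        """Drop a reference; returns True only when cleanup actually ran."""
        if self._closed:
            return False
        self._count -= 1
        if self._count == 0:
            self._closed = True
            self._close_fn(self._handle)
            return True
        return False
```

A second caller would `acquire()` before use and `release()` after, so the order in which callers finish no longer matters.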
  3. I have found this not to be the case, especially if the alternative is a statically linked "middle layer" where you have to rely on the developer to release a new one whenever the other libraries are updated. I've found conditional statements a superior solution when the main libraries are already supplied by the developers or the operating system. We need a definitive guide to using the "Instance Data Pointer", which could alleviate, if not remove, this problem.
  4. Yeah. I looked a bit more and it looks like the image wants to create a USB device as part of the install and, IIRC, VMware has problems with USB boots.
  5. When I visit lavag.org and I'm not logged in, I'm greeted by the page below (Chrome, Firefox, IE11 and Opera.) Once logged in the pages display normally.
  6. Yes. It is a "recovery disk" so requires a pre-existing install. It fails "provisioning" if you try to install from scratch in VMware.
  7. The only time you usually see an error 56 on a send is when the TCP/IP buffer is full. Error 66 is a normal server disconnect. There are a couple of reasons you may get error 56 on a send, but the usual one is sending too quickly - say, 2 MB/s on a 10 Mb/s TCP/IP link. Less frequent causes are when the connection goes deaf and mute but is still established (usually happens with transparent proxies) and NIC problems (especially with multiple cards).
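The two situations above (error 56 and error 66 are LabVIEW error codes) can be reproduced outside LabVIEW. A sketch in Python using a local `socketpair()` as a stand-in for a real TCP connection: filling the OS send buffer makes a non-blocking send fail immediately (the buffer-full case), while an orderly close by the peer makes `recv()` return an empty string (the normal-disconnect case):

```python
import socket

# Buffer-full case: keep sending without the peer reading until the OS
# send buffer fills; a non-blocking send then fails immediately.
sender, receiver = socket.socketpair()
sender.setblocking(False)
buffer_filled = False
try:
    while True:
        sender.send(b"x" * 65536)
except BlockingIOError:
    buffer_filled = True      # send buffer full: back off and retry later
sender.close()
receiver.close()

# Normal-disconnect case: the peer closes cleanly, so recv() returns
# an empty payload rather than raising an error.
a, b = socket.socketpair()
b.close()
eof = a.recv(1024)            # b"" signals an orderly peer disconnect
a.close()
```

The practical upshot matches the post: a full buffer means slow down and resend, while an empty read is the normal end of a session, not a fault.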
  8. I don't blame them for this. Linux is a house of cards and probably the best way to have some confidence that something might work from version to version or distro to distro is if you statically include your entire execution environment (like Steam). Even then it's a 50/50 chance that some nutter hasn't broken an ABI that you overlooked.
  9. Interesting..... If it is from Pharlap to Linux, then it is not an "upgrade". Hmmm. Dual boot a PXI rack? That must be worth a few hours playing around with
  10. This is the main reason ActiveX and .NET are banned from my projects. HTML is the output of choice, currently. You can even use simple string replace on keywords in report templates for most things. You don't have to be a web developer, but if you can palm it off, erm, I mean, outsource it to IT, then that's a bonus. It also means that later, with a bit of JavaScript, you can make them into "live" reports and interfaces.
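The keyword-replace templating described above fits in a few lines. A hypothetical sketch in Python: the `{{NAME}}` delimiters and the field names are my own illustration choices, not anything from the post:

```python
# A report template with placeholder keywords (delimiters are arbitrary).
REPORT_TEMPLATE = """<html>
<body>
  <h1>{{TITLE}}</h1>
  <p>Operator: {{OPERATOR}}</p>
  <p>Result: {{RESULT}}</p>
</body>
</html>"""

def render_report(template, fields):
    """Fill a report template by plain string replacement on keywords."""
    html = template
    for name, value in fields.items():
        html = html.replace("{{" + name + "}}", str(value))
    return html

report = render_report(
    REPORT_TEMPLATE,
    {"TITLE": "Leak Test", "OPERATOR": "ShaunR", "RESULT": "PASS"},
)
```

No web framework needed; the same template can later be served as-is to a browser and progressively enhanced with script for "live" reports.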
  11. Thank you, I guess I'm learning. How's this for abstract?...... OOP can superficially describe "things" in the real world but is atrocious at modelling how "things" change over time. Damn. Now my "Fruit" base class needs a "has worm" property
  12. OOP obfuscates and makes code less readable. If you use dynamic dispatch, you even have to go through a dialogue and guess which implementation is invoked. Abstraction does not make code more readable; it hides code and program flow. It may seem more readable than what you had originally, but that is a relative measure and you know your code intimately. Tenuous at best, relying on an assumption that, with your white-box knowledge of the code and object-oriented expertise, you were able to identify optimisations. Optimisation is a function of familiarity with the code and recognising patterns; it is not the paradigm that dictates this, it is experience. Again, tenuous, claiming that abstraction enabled optimisation. See my previous comment. Code reuse is a common OOP sales pitch that has been proven to be false. It is LabVIEW that is cross-platform, so of course your code is cross-platform. Again, this has been proven to be incorrect. The usual claim is that it is slower to begin with but gains are realised later in the project, so overall it is faster. Project gains are dictated more by discipline and early testing than by paradigm; one-week agile sprints seem to be the currently accepted optimum. Another sales pitch. See my previous comment.
  13. Then I cannot rewrite it in Classical LabVIEW, and it is just an argument of "my dad is bigger than your dad". All my arguments are already detailed in other threads on here (which you refused to let me reference last time). You think it's great and I think not so much. I outline real implications of using LVPOOP (code size, compile times et al.) and you outline subjective and unquantifiable measures like "elegance". There is nothing that can't be written in any language using any philosophy. The idea that a problem can only be solved with OOP is false. It boils down to the efficacy of achieving the requirements, and OOP is rarely, if at all, the answer. After 30 years of hearing the sales pitch, I expect better.
  14. So. Pictures are now code? I would forgive a noob for that but, come on! FWIW, the Classical LabVIEW equivalent of dynamic dispatch is a case statement, and at the top level it would probably look identical to the first if it were contained in a sub-VI. Apart from that.... very pretty, and don't forget to edit the wire colours for that added clarity. Even if the caller has functions with different terminals?
  15. It's a small and niche sector. You have students being taught by students, and the experienced ones have either been moved on to management or are a key person in dead man's shoes. I, like you, am jaded. When you have seminars and talks consisting of nothing more than someone relaying their bumbling through to an epiphany, you know there is a dearth of experience. There are lots of architects and students and very little in between, and those architects only want to do the design, not the coding. The good news for the OP is that because it is a niche market, specific knowledge gives way to "potential" knowledge, and an unsaturated market opens more opportunities with a lower bar to entry.
  16. Generally it is LabVIEW's implementation of OOP: the poor compile times, the complexity, the maintainability, the ballooning of the code base and the bugs. Classical LabVIEW is easy and arguably produces more robust code that is easy to understand for engineers rather than CS academics. I often talk about "pure" and "applied" programmers (an analogue to pure and applied mathematics), and Classical LabVIEW is great for applied programmers. OOP is unnecessary complexity in all but the most fringe use cases, and it has sucked all the development resource of the language for features that could have benefitted how the vast majority of production code, that does real things, is written. But no. Interfacing with the Windows subsystems that I'm used to never involves objects; it uses functions in dynamic libraries that take data arguments. Opaque pointers to objects are the quickest way to a GPF, and in LabVIEW that means taking out the IDE too. It is only when you get to .NET that you're forced to start interfacing with objects, and I think you know how unimpressed I am with that - it's banned from my projects. If I want to use .NET I would use C#, not LabVIEW. One advantage of being a polyglot, so to speak, is that I'm not limited to one programming language and can choose the best tool for the job.
  17. Without my successfully being able to convey the fundamental difference between LabVIEW and, say, C[++] or Pascal and the many other procedural languages that OOP was proffered as a solution for, you should perhaps put my comments to one side while you fill out the feature set of the API. I will leave you with this, though: why isn't a VI an object?
  18. It doesn't support it at all, because messaging is a method of breaking dataflow and, if I am feeling generous, an equivalent of dataflow with one input to satisfy. The idea that dataflow is "data flowing" - moving from one place to another - is a simplification used to teach the concepts. In fact, it is about "state". What defines a dataflow language is that program execution continues when all inputs are satisfied. Execution state is manhandled in other languages and concepts, by ordering function calls (procedural) or unwinding a call stack (functional), and it is still the main problem with them today. This is why we say that dataflow languages have implicit, rather than explicit, state - specifically, "execution state" is implicit rather than "system state". From this perspective, you have broken dataflow for excellent reasons and are proposing to add it back in with added complexity so that it "looks" like dataflow again - a problem of your own creation, like so many other main-stream, non-dataflow concepts when applied to LabVIEW. The solution will be a service, actor or whatever you want to call it, that has visibility of global execution state. In Classical LabVIEW we would just call a VI as non-reentrant from the three loops and allow the implicit nature to take care of ordering and progress of the loops. However, I understand the desire for "completeness" of your API, and that's fine. Futures are a fix for yet another self-inflicted problem of OOP dogma, so I don't agree that there are no OOP concepts involved. In LabVIEW, futures are an architectural consideration, not one of implementation.
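For readers unfamiliar with the pattern being debated above, here is the future concept in a text language, sketched with Python's standard library (this is an illustration of the general idea, not of the API under discussion; the `acquire_reading` function and channel name are hypothetical). A future is a placeholder for a result that is not ready yet; the caller continues immediately and only blocks at the point where the value is actually needed:

```python
from concurrent.futures import ThreadPoolExecutor

def acquire_reading(channel):
    """Stand-in for a slow acquisition; channel name is hypothetical."""
    return channel, 42.0

# submit() returns immediately with a future; result() is the
# rendezvous point that blocks until the worker has finished.
with ThreadPoolExecutor(max_workers=2) as pool:
    future = pool.submit(acquire_reading, "ai0")
    # ... other work can proceed here while the acquisition runs ...
    channel, value = future.result()
```

In dataflow terms, `future.result()` plays the role of the unsatisfied input terminal: downstream code cannot proceed until the value arrives, which is exactly the ordering that Classical LabVIEW wires give you implicitly.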
  19. If you use Yocto, then one of the output formats is a VM image, aside from the more usual *.img and *.iso. The NI repository has a Yocto recipe for building NI-RT. After 3 days of fighting with Python and compiler toolchains, I popped my cherry with a Raspberry Pi and custom image (which are included in Yocto and all worked fine). Onwards and upwards to NI-RT. I got to the stage of finding out that everything turned to crud because Yocto isn't backwards compatible (I had 2.2 and NI use 1.8), so I rage-quit at that point and posted on here after seeing Daklu's comment. If you want to be really depressed, follow the link in the readme.md of meta-nilrt for "Building the Layer", where all your questions will be answered.
  20. Ahh. But I am, all the way. I merely need enough to debug SQLite, OpenSSH and OpenSSL on some semblance of the NI-RT target. I care not for DAQ, VISA, Vision or anything else. Since this is for internal debug, I don't really have any open source obligations either, unless I distribute the VM. The aforesaid binaries I use have already been researched and I comply with the licences for the end products. The new plethora of licences on the RT target would cause me to research each individually for compliance. The risk of overlooking a clause that predatory licence practices rub their hands with glee at means I would probably not bother if it took more than some arbitrary number of hours or days. That effort would also have to be replicated every time there is a new release, and I would probably end up spending time writing tools and systems to catch changes. The Linux community seems happy with this arrangement (even though Linus Torvalds isn't). I would therefore rather just not use it, and I have not used it prior to Windows 10 for these exact same reasons, to the extent that I withdrew Linux support even though most products work with it. Thus I will be prevented from sharing the fruits of that [VM] labour before it has even begun. That is the tragedy of the commons of open source. Of course, I could just install the old version of Yocto and decimate my version 2.2, which took 3 days of pursuing rabbits down very deep holes to set up and get working. That wouldn't help prevent others from the same pain of getting hold of a VM, though.
  21. Good idea to move. Thanks. Maybe move the other responses here too? I really don't want to pollute the other thread. The problem is that if you do manage to get an image from a device, it probably won't be compatible with a VM. It may be OK to put on another identical platform, though. When you compile Linux stuff, there is a whole toolchain dedicated to figuring out what system you are compiling on and then setting a shedload of compiler switches and includes (which never work out of the box, in my experience). Cross compiling is even worse. Linux is the worst platform to maintain. Of course, NI could supply us with one; then licensing, obnoxious build tools et al. wouldn't be a problem and we could get on with the proper programming instead of sorting out 1980s build systems. As far as licensing is concerned, Linux doesn't make it easy. The onus is on us to identify all the different licences and then comply. Since distros are an amalgam of software written by a multitude of crazy militants, each with different ideas about who can and can't use their software, it is a minefield. The Linux kernel on its own is a known quantity (GPL) and, if they have produced a custom kernel, that is sticky. But who knows what NI libraries and support binaries from 3rd parties have been added (well, NI do).