Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. As far as Phar Lap ETS goes you are definitely right. Phar Lap is a dead end and has in fact been one for almost 10 years. I'm sure it was one of the driving factors behind NI's push for NI Linux RT. It's no coincidence that it was released about one year after IntervalZero declared Phar Lap ETS EOL. Whether you want to abandon LabVIEW RT altogether is of course a different question. You can certainly build your own Linux RT kernel and implement your own automation software platform. I would expect such a project to take at least a few man-years, while still not being as integrated as LabVIEW, and it would be a continuous catch-up race to adapt to new hardware, since what was the hottest craze last year has already been declared obsolete today. It's a business model you could probably earn some money with, IF you intend to concentrate on doing just that. If it is to support your own automation projects, it will be a continuous struggle to keep it up and running, and you will likely always be barely getting it to work each time, with the plan in the back of your head to finally make it a real, released product you can rely on, which will never quite happen.
IMAQdx for standard Linux is not likely to ever happen. NI is not really selling image acquisition hardware anymore. They have IMAQdx and IMAQ Vision for their Linux RT targets, and that is pretty much all you will likely ever get. There seems to have been a plan for standard Linux hardware drivers, both for DAQmx and IMAQdx, but with the current situation I'm definitely not holding my breath. Also, hardware drivers for Linux are a very tricky business: you either open source them completely and get them into the mainstream kernel, or you are constantly fighting the Linux gods and the distribution maintainers to keep your drivers running with the latest software, and you will never be able to make them work on even 50% of the installed Linux distributions out there.
Add to that the fact that outside of servers (where automation software with DAQ and IMAQ is extremely seldom of any interest) the Linux market share is pretty small. And the majority of those who do use Linux outside of server environments are more likely to tinker for months with YouTube videos explaining often more than questionable hacks to achieve something than to pay money for hardware and software beyond their PC rig with RGB LEDs and water cooling. For a company like NI that simply makes this market VERY uninteresting, as there is not much money to be earned while the costs are enormous.
  2. I believe this setting should have been switched off by default, and still should be. Too many problems with it. If someone wants this feature (and is willing to test their application with it thoroughly on several different computers) they can always switch it on. I don't think it is currently worth the hassle.
  3. No, there isn't such a driver (that I would know of and that is openly available). There might be one in some closed-source environment for a particular project (not LabVIEW related; Phar Lap ETS was used in various embedded devices such as robot controllers, etc.) but nothing official. There are several problems with supporting this: 1) FTDI uses a proprietary protocol for their devices, not the official USB-CDC class profile, and they have not documented it publicly. You can only get it under NDA. 2) Phar Lap ETS is a special system and requires special drivers written in C, and you need the Phar Lap ETS SDK for that. This was a very expensive software development suite. WAS, because IntervalZero discontinued Phar Lap ETS around 2012 and since then only sells licenses to existing customers with an active support contract, but doesn't accept new customers for it. Now, there is an unofficial (from FTDI's point of view) Linux open source driver for the standard FTDI devices (not every chip provides exactly the same FT232 protocol interface, but almost all of the chips that predominantly support the RS-232, 422, or 485 modes do have the same interface), and I have in the past spent some time researching it and trying to implement it on top of VISA-USBRAW. But with the advent of Windows 7 and its requirement to use signed drivers even for a pure INF-style driver like the VISA-USBRAW driver, this became pretty much useless. The signing problem doesn't exist on the Phar Lap ETS system, but development and debugging there is very impractical, so when IntervalZero announced the demise of Phar Lap ETS, I considered this project dead too. There was no easy platform to develop the code on, and no useful target where it could still be helpful. All major OSes support both USB-CDC and USB-FTDI devices pretty much out of the box nowadays. This includes the NI cRIOs that are based on NI Linux RT.
The only two beasts that won't are the Phar Lap ETS and VxWorks based NI realtime targets, both of which have been in legacy state for years and are getting sacked for good this year or next. So while it would be theoretically possible to write such a driver on top of NI-VISA, the effort for that is quite considerable and it's low-level tinkering for sure. The cost would be enormous for just the last few Mohicans who still want to use it on an old and obsolete Phar Lap ETS or VxWorks cRIO controller. As to whether there is a device that can convert your USB-FTDI device back into a real RS-232 device: devices based on the FTDI VNC1L-1A chip can implement this, here is an example as a daughter board. You would have to create a carrier with an additional TTL-to-RS-232 converter and the corresponding DB-9 connector, or if you are already busy building a PCB anyhow, just integrate the VNC1L-1A chip directly on it. The "easiest" and cheapest solution would be to use something like a Raspberry Pi. That can definitely talk to your FTDI device, with minor tinkering of some Linux boot script in the worst case. Then you need an application on the Raspberry Pi that connects to that virtual COM port and acts as a proxy between it and an RS-232 port (or a TCP/IP socket) on the Pi, which you can then connect to from your LabVIEW Phar Lap ETS program.
  4. Correct about the LabVIEW chroot only being an issue on the LINX targets. But several IPC mechanisms will actually work even across chroot jail borders, such as good ol' TCP/IP (usually my IPC of choice, as it works on the same machine, across VMs, on different machines on the same subnet, and with some proper setup even across the world). Even shared memory could be made to work across the chroot border, but it's complicated, sensitive to correct configuration, and shared memory is difficult to interface to in any case.
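To illustrate why TCP/IP makes such a convenient IPC mechanism, here is a minimal loopback echo sketch in Python (function names are mine, purely for illustration). The same code works unchanged whether the two ends live in one process, in two chroot jails, or on two machines; only the address changes.

```python
import socket
import threading

def start_echo_server(host="127.0.0.1", port=0):
    """Start a tiny echo server in a background thread; return its port.
    port=0 lets the OS pick a free port."""
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        with conn:
            while True:
                data = conn.recv(4096)
                if not data:       # client closed its sending side
                    break
                conn.sendall(data) # echo everything back

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

def ask(port, payload):
    """Client side: connect, send one request, read one reply."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(payload)
        c.shutdown(socket.SHUT_WR)   # signal end of request
        return c.recv(4096)
```

Replacing `127.0.0.1` with a remote hostname is the entire difference between same-machine IPC and talking across a subnet, which is exactly the flexibility mentioned above.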
  5. Sounds like a masterpiece of engineering. Use a non-standard connector, or rather, if they use the DB9 on the other device, connect its pins in a different way than the standard, and then use a different pinout again for every new device. Someone must think selling custom cables is the way to earn lots of money! 😀
  6. Would it be a lot to ask for this library to support C++ style comments? Basically, any text after // would simply be ignored until the EOL. I know that strictly speaking JSON doesn't support comments, but JSON5 for instance does support C++ style comments.
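One way to get this behavior without changing the parser itself is a small preprocessing pass that strips // comments before handing the text to a strict JSON parser. A sketch in Python for illustration (the careful part is leaving slashes inside string literals, like URLs, untouched):

```python
import json

def strip_line_comments(text):
    """Remove C++-style // comments (as allowed by JSON5) so that a strict
    JSON parser accepts the text. Slashes inside string literals are kept."""
    out = []
    in_string = False
    escaped = False
    i = 0
    while i < len(text):
        ch = text[i]
        if in_string:
            out.append(ch)
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True      # next char is escaped, e.g. \"
            elif ch == '"':
                in_string = False
            i += 1
        elif ch == '"':
            in_string = True
            out.append(ch)
            i += 1
        elif ch == "/" and i + 1 < len(text) and text[i + 1] == "/":
            # Skip to end of line; the newline itself is kept
            while i < len(text) and text[i] != "\n":
                i += 1
        else:
            out.append(ch)
            i += 1
    return "".join(out)

doc = '{"url": "http://x//y", // a comment\n "n": 1}'
parsed = json.loads(strip_line_comments(doc))
```

The same single-pass state machine is trivial to port to any language; a library could run it internally on every input to accept commented JSON while still emitting strictly conforming output.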
  7. According to the link mentioned in the post before yours, you got that wrong. Pins 7, 8, and 9 are the serial port signals on the DB9 connector. The DB15 connector has them on pins 1, 2, and 4!
  8. Note that LabVIEW does not officially support Windows Server OS. I believe it will generally work, but ActiveX is definitely one of the areas where I have my suspicions. Windows Server does quite a bit with security policies to lock the computer down for more security. ActiveX may very well be one area that is very much affected by this. Have you asked MathWorks whether they fully support Windows Server editions? In any case, ActiveX components need to be installed, and that also means modifications to the registry, and whoever installs one can choose whether to make the component available system-wide or just for the current user. Your component may be such a case, or Windows Server may enforce ActiveX registration on a per-user basis, or maybe even disallow ActiveX installation into the Admin account without some special command line switch. We don't know, and we don't generally use Windows Server. Such systems are usually maintained by IT staff, who tend to be very unhappy about anyone wanting to install anything on "their" systems, so it basically never happens.
  9. And how do you suppose your client, who wants to use this library on a cRIO-9064 (just to name one of the currently still possible targets, which are Windows 32-bit, Windows 64-bit, Phar Lap ETS, VxWorks, Linux 64-bit, MacOS X 64-bit, and Linux ARM), should recompile it without having access to the key? Sure, with asymmetric encryption you don't have to publish your private key, but then you are still in the same boat! If you want to give a client the possibility to recompile your encrypted VIs (in order not to have to create a precompiled VI for each platform and version your clients may want to use), LabVIEW somehow has to be able to access the key on their machine. And if LabVIEW can, someone with enough determination can too. Sure enough, nowadays LabVIEW could be turned into LabVIEW 365 and you could host your code on your own compile server and only sell a VI reference to your clients. Anytime a client wants to recompile your VI, he has to contact your compile server, furnish his license number and the desired compile target, and everything is easy peasy, unless of course your compile server has a blackout, your internet provider has a service interruption, or you go out of business. All very "nice advantages" of software as a service, rather than a real physical copy.
  10. This very much depends on the font used. Not all fonts specify glyphs for all these attributes. If a font doesn't, the attribute request is simply ignored and the text is drawn in the basic form. LabVIEW doesn't draw these texts itself; it just passes the request to Windows GDI (or X Windows on Unix, or Quartz on MacOS) and tells it to do its work.
  11. Consider the diagram password equivalent to your door lock. Does it prevent a burglar from entering your home if he has really, absolutely set his mind on doing so? Of course not! Is it a clear indication to the normal law-abiding citizen not to enter? You bet! There is no simple way to protect a diagram that the compiler needs to be able to read in order to recompile the program (for a different version, platform or whatever) without also leaving a fairly easy way to peek into it for a person who truly wants to. In fact there are many ways to circumvent it. You could patch the executable to skip the password check when trying to open a diagram, or you could locate the password hash and reverse its algorithm to get back at the password. The only problem is that this is an MD5 hash, so it is not a simply reversible algorithm. But MD5 is not a secure hash anymore: with enough CPU power you can find a string that results in the specific hash (it does not necessarily have to be the original password, since multiple arbitrary-sized character sequences will eventually map to one single hash code). It may take a few days and will certainly contribute to global warming 😀, but it absolutely can be done. Chances are that that CPU power may be more productive in terms of money when directed at mining cryptocurrency, even with the current dive in value. Another approach is to simply replace the password hash in the file with the hash for the empty password (which means unprotected). It's not as simple as replacing 16 bytes of data in the file with a standard byte sequence, since that hash is also computed over some of the binary data of the VI file, but it's also not impossible. Why didn't they make it more secure? The only way to do that would be to truly encrypt the diagram, but then you also need the password to be able to recompile the code.
But then you could just as well remove the diagram when distributing the VIs, as the diagram has no real additional value anymore, except that you as the password owner don't have to maintain two versions: one without the diagram to give to your users, and one with it for your maintenance and further development. You would end up with the problem of having to distribute your VIs for every LabVIEW platform you want to support, or hand out the password to your users so that they can compile them for a different platform or version. Basically, to come back to our door lock: the security you are expecting would be pretty much similar to replacing all windows and doors in your house with a concrete wall and leaving only one single, heavily steel-reinforced door with quadruple high-security locks and an automatic gun trap for when the wrong code is entered more than 3 times. Very secure, but absolutely not comfortable or practical, and potentially lethal to your family members and friends.
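The two attacks described above, searching for any string that maps to the stored hash and knowing the well-known hash of the empty password, can be illustrated with a toy example using Python's hashlib. A real attack is the same loop with an astronomically larger search space (and GPU acceleration); the function name is mine for illustration.

```python
import hashlib
import itertools
import string

def md5_hex(data):
    return hashlib.md5(data).hexdigest()

# The well-known MD5 of the empty input: the "empty password" hash that an
# attacker could substitute into the file to mark the VI as unprotected.
EMPTY_MD5 = md5_hex(b"")   # 'd41d8cd98f00b204e9800998ecf8427e'

def brute_force(target_hex, alphabet=string.ascii_lowercase, max_len=4):
    """Toy preimage search: try every short string until one hashes to the
    target. Any match will do; it does not have to be the original password."""
    for n in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=n):
            candidate = "".join(combo).encode()
            if md5_hex(candidate) == target_hex:
                return candidate
    return None

# Pretend 'abc' was the diagram password; recover it from its hash alone.
found = brute_force(md5_hex(b"abc"))
```

With 26 lowercase letters and length 4 this finishes in well under a second; every extra character multiplies the work by the alphabet size, which is exactly why the post says it may take days of CPU time for a realistic password.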
  12. As to whether NI could do that legally: yes, they could. Whether they want to? Why should they? Are you releasing all your source code as open source? And this is really meant seriously: propose a meaningful business case for why NI should release that source code, as if you were a manager at NI! And don't forget all the technical support questions from newbies trying to peek into that code, thinking they can fix something, and then boom, everything goes haywire. There is no business case in the current NI software model for this, unless NI decides to go the .Net Core way with their software, which I don't quite see happening yet. Open sourcing components that are not just nice-to-have libraries that you can more or less throw out into the wild according to the motto "take it as is, improve it on your own, or leave it!" only causes additional work and issues without solving anything.
  13. If you really want to spend a lot of time and effort on this, the more promising way would be to simply start developing an Advanced Analysis Library replacement directly around the Intel Math Kernel Library. The DLL interface you are talking about is mostly a historical burden carried over from when NI was still developing their own Advanced Analysis Library. Around the 7.x days they decided that they could never compete with the girls and guys who probably knew best on this planet how to squeeze a few more percent of performance out of Intel CPUs, because they worked at the company that had developed those CPUs, and so NI replaced the in-house developed AAL math library with the Intel Math Kernel Library. The DLL interface was left intact so there was little to change in the VIs that made up the AAL, but internally most functions in that DLL call more or less directly into the Intel MKL. Spending lots of time documenting that proxy wrapper DLL is not very useful. There are very few secrets in there that you couldn't learn by reading the official documentation for the Intel MKL.
  14. No, and there likely never will be! NI has a very strong tendency to never talk about what might or might not be, unless the actual release of something has pretty much been announced officially. In the old days with regional field sales offices you could sometimes get some more details during a private business lunch with the field sales office manager (usually with the request to please not share it with anyone else, since it wasn't yet official and in fact might never really materialize in that exact way). The only thing we know is that the LabVIEW source code was in some escrow and supposedly still is, for the case that NI might end up belly-up. How much that really means, other than to soothe managers who were worried about choosing a single-vendor software solution, I can't really say. It certainly doesn't mean that the LabVIEW source code will automatically be open source if NI folds or decides to shut down LabVIEW development. Selling the rights to some other party who declares its intention to somehow continue development is almost certainly enough to fulfill all obligations of such an escrow agreement. As Brian (Hooovahh) already mentioned, open sourcing LabVIEW is not an easy task. Aside from old legacy "crimes" committed in the early days of LabVIEW development (often technically unavoidable, since there simply weren't better technologies available, LabVIEW was pushing the economically available hardware to its limits, and/or LabVIEW's source code requirements were pushing the limits of what C compilers could do back then), there are also things that are likely simply not open-sourceable for several legal reasons. Reworking the source code to be able to publish it as open source would cost a significant effort. And who is going to foot that bill after NI is gone or has decided to stop LabVIEW development?
For instance, I'm pretty sure that almost nothing of the FPGA system would be legally open-sourceable, since it is probably encumbered with several NDAs between NI and Xilinx. And there are likely several other parts that are limited in similar ways under the hood. Even just the effort to investigate what could and what couldn't be open sourced is going to cost some serious legal and engineering manpower, and with that real money. Then someone has to remove anything that was identified as impossible or undesirable to open source and clean up the mess after that. This in itself would likely be a serious project. And then what? Who is going to maintain a project like that? .Net Core is only really successful because Microsoft puts its might behind it. Without Microsoft's active support it would still be Mono, hopelessly trying to catch up with whatever new thing Microsoft comes up with.
  15. It very much depends on how far back you dare to look in that graph! 😀 Since it is a NASDAQ share you should rather go to the source, Luke! That steep climb around 2017 actually comes after they started implementing those changes. The decline you see mostly happens during Covid, but as a trend it is not very significant yet. That all said, the current trend to measure everything in share price is a hype that is going to bring us the next big crash before too long. My guess is that once most people have crawled out of their Covid-imposed isolation in their private hole, they will look at the financial markets and wonder where the actual real-world value is that some of the hyped companies would need to have to make their share price expectations even remotely true. And then the big awakening happens, when the first people start to yell "but he isn't wearing any clothes".
  16. I don't have those numbers. What I know is that a few years ago NI noticed that their sales figures were starting to flatten. For a company used to high double-digit growth numbers year after year this is of course a very alarming signal. 😀 They hired some consultants, who came to the conclusion that the traditional T&M market NI was operating in simply didn't have much more air left in it to support the growth NI had gotten used to. And strategic decisions were made behind the scenes. Not too much of that has been openly communicated yet, but the effects have been quite obvious in the past few years. NI has deprioritized PC-based test and measurement hardware, completely abandoned any motion ambitions, marginalized their vision ambitions, and put much of the traditional DAQ hardware into legacy mode. And their whole sales organization has been completely revamped: no field sales offices anymore, and highly centralized technical support from typical call-center-style, semi-outsourced places. Behind the scenes they do large-scale business and have increased their sales further since that alarming consultancy report, so somehow it seems to work. One reason they may not have been very public about these changes is probably that it changed their old model of relying very heavily on external Alliance Members for the actual application support of all their customers. In a few strategic industries they have now moved in to deliver full turnkey systems themselves, directly to the customer. For the typical Alliance Member that probably doesn't directly mean loss of business, since the customers and projects NI serves in this way are accounts that only very few Alliance Members would even dare to consider looking at, as the volume of the business transactions is simply very large. However, it certainly has other effects for all Alliance Members.
The contact with NI has become very indirect, with all the regional sales offices having vanished, and NI's efforts to compensate for that with other means haven't gotten much further than marketing presentations with lots of nice talk up to this point. As to LabVIEW: its demise has been predicted since it was first presented. First because it was clearly just a toy that no engineer could ever take seriously, then because NI didn't push for an international standard to formalize the LabVIEW G language, later because they didn't want to open source it. I'm not sure any of these things would have made a significant difference in either the positive or the negative direction. It's clear that LabVIEW is nowadays a small division inside NI that may or may not find enough funding from the powers that be to maintain a steady development. If you look at the software track record of NI, it doesn't look too good. They bought many companies and products, such as HiQ, Lookout with Georgetown Systems, DASYLab, and quite a few more, and none of them really exists nowadays. Of course a lot of them, such as DASYLab, were in fact simply buy-outs of the competition, and it was clear from the start that the product had no very bright future in the NI stable. Lookout was marketed for quite some time, and a lot of its technology was integrated into LabVIEW (LabVIEW DSC was built directly on top of much of Lookout's low-level technology, which was also the basis for the low-level protocols used in Shared Variables and similar things). LabWindows/CVI has been lingering in a semi-stasis existence for several years already. Its development can't quite keep pace with the main contenders in the market, GCC and Visual Studio. In view of this, acquisitions like Digilent and MCC may look a bit surprising. On the other hand it might be a path to something like an HP/Agilent/Keysight diversification.
NI itself moves into the big turnkey semiconductor testing business (and the EV testing market), while one of these other companies takes over the PC-based LabVIEW, DAQ, and instrument control business.
  17. That's always debatable. From a technical point of view I fully agree with you. LabVIEW is a very interesting tool that could do many more things if it had been managed differently (and also a few less than it does nowadays). For instance, I very much doubt it would ever have reached the technical level it has nowadays if a non-profit organization had been behind LabVIEW. The community for LabVIEW is simply too diverse. The few highly skilled people in the LabVIEW world with a very strong software engineering background, who could drive development of such a project in an open source model, do not reach the critical mass to sustain its continued improvement. On the other end of the scale you have a huge group who want to use LabVIEW because there is "no programming involved", to parody some NI marketing speak a bit. Maybe, just maybe, an organization like CERN could have stepped in, just as happened with KiCad. KiCad lingered for a long time as a geeky open source project with great people working on it in the typical chaotic open source way. Only when an organization like CERN put its might behind it did the project slowly move into a direction where it could actually start to compete on features and stability with other packages like Eagle PCB. It also brought in some focus. CERN is (or at least has been) quite a big user of LabVIEW, so it could have happened. CAD development moved on in the meantime too, and while KiCad nowadays beats every CAD package that was out there 20 years ago hands down, the current commercial CAD platforms offer a level of integration and highly specialized engineering tools that require a lot of manual work when attempted in KiCad. Still, you can nowadays design very complex PCBs in KiCad that would have been simply impossible to do in any CAD package 20 years ago, no matter how much money you could have thrown at it back then.
But LabVIEW almost certainly would not cross-compile to FPGAs nowadays, and there would be no cRIO hardware and similar things to which it almost seamlessly compiles, if it had not been for NI. On the other hand, LabVIEW might actually be a common teaching course at schools, much like Python is nowadays on the ubiquitous Arduino hardware, if NI had decided to embrace LabVIEW as a truly open platform. The reality is that we live in a capitalistic system, and that yearly earnings are among the highest valued indicators of success or failure for every product and company. Could LabVIEW have been, and be, managed differently? Of course! Could it have survived and sustained a steady and successful development that way? Maybe!
  18. There is a standard digital signal available in the FPGA environment that allows resetting the device. You can assert this signal from your FPGA program. So one way would be to add a small loop to your FPGA program that polls the external digital input (preferably with some filtering to avoid spurious resets) and then feeds that signal into the FPGA Reset boolean.
  19. NI didn't say they would be porting NXG features to 2021, but to future versions of LabVIEW. Technically such a promise would have been unfulfillable, since at the time the NXG demise was announced, LabVIEW 2021 was basically in a state where anything that was to be included in it had to be more or less fully finished and tested. A release of a product like LabVIEW is not like your typical LabVIEW project, where you might make last-minute changes to the program while testing your application at the customer site. For a software package like LabVIEW there is a complete code freeze except for critical bug fixes, then a testing, packaging, and re-testing cycle for the beta release, which alone typically takes a month or two, then the beta phase of about 3 to 4 months, and finally the release. So about 6 months before the projected release date, anything that is not considered ready for prime time is simply not included in the product, or is sometimes hidden behind an undocumented INI file setting. Considering that, the expectation to see any significant NXG features in LabVIEW 2021 was simply naive and irrational. I agree with you that LabVIEW is a unique programming environment with some features that are simply unmatched by anything else. And there are areas where its age is clearly showing, such as the lack of proper Unicode support and, related to that, the lack of support for long path names. Personally I feel I could tackle the lower-level part of full Unicode support in LabVIEW, including full Unicode path support, quite easily if I were part of the development team, but I have to admit that the higher-level integration into front panels and various interfaces is a very daunting task that I have no idea how I would solve. Still, reworking the lower-level string and path management in LabVIEW to fully support Unicode would be a first and fundamental step that would allow making it available to the UI at a later stage.
This low-level manager can exist in LabVIEW even if the UI and higher-level parts don't yet make use of it. The opposite is not possible. That is just one of many things that need some serious investment to make the whole LabVIEW platform viable again for further development into the future. This example also shows that some of the work needed to port NXG features back to LabVIEW first requires significant effort that will not be immediately visible in a new LabVIEW version. While a change as described above is definitely possible to do within a few months, the whole task of making all of LabVIEW fully Unicode capable without breaking fundamental backwards compatibility is definitely something that will take more than one LabVIEW version to fully materialize. There are a few lower-hanging fruits that could help prepare for that and should have been done years ago already, but they were discarded as "already fixed in NXG"; the full functionality just for complete Unicode support in LabVIEW is going to be a herculean task to pull off without going down the NXG path of reinventing LabVIEW from scratch (which eventually proved to be an unreachable feat). My personal feelings about the future of LabVIEW are mixed. Not so much because LabVIEW couldn't have a future, but because of the path NI as a company is heading down. They have changed considerably over the last few years, from an engineering-driven to a management-driven company. While in the past engineers had some real say in what NI was going to do, nowadays it's mostly managers who see Excel charts, sales numbers, and the stock exchange as the main decision-making inputs for NI. Everything else has to be subordinated to the bigger picture of a guaranteed minimum yearly growth percentage and stock price. The traditional Test & Measurement market NI has served for much of its existence is not able to support those growth numbers anymore.
So they are making heavy inroads into different markets and seem to consider the traditional T&M market by now just as a legacy rather than a significant contributor to their bottom line.
  20. Well, ultimately everything LabVIEW does is written in C(++). Some of it (a very small part) is exported to be accessible from external code. Most of it goes a very different and more direct way of calling the actual C functions. Functions don't need to be exported from LabVIEW in order to be available for built-in nodes to call. That can all happen much more directly than through a (platform-dependent) export table.
  21. No! Your guesswork got it wrong in many ways. The AZ and DS memory spaces are an old Mac OS Classic programming distinction. AZ memory could be automatically relocated by the OS/kernel unless it was explicitly locked by the user-space application. DS memory always stays at a fixed memory location. It also meant that if you tried to access AZ memory without locking it first with AZLock(), you could crash badly if the OS decided that it needed that space and moved the memory (possibly into a cache file) during the time you were accessing that memory block. With the virtualized memory management hardware support in modern OSes such as Windows NT and MacOS X, this distinction became superfluous. In user space, memory nowadays always appears at a fixed virtual memory address; where it is actually stored in real memory (or a disk cache file) is handled transparently by the OS virtual memory manager, supported by a powerful hardware memory management unit integrated directly in the CPU. As soon as NI dropped support for Mac OS Classic in LabVIEW, they consequently also removed the AZ memory manager completely. In order to support old legacy C code and CINs that explicitly used the AZ memory manager functions, the exports still exist, but they are simply linked to the corresponding DS counterparts where those exist, and those that have no DS counterpart, like the Lock and Unlock functions, simply call an empty function that does nothing. The Actor Framework, as far as I know, does not use explicit C code for anything but simply builds on the existing LabVIEW OOP technology, so I'm not quite sure what you are referring to here.
The old LVOOP package used a technique similar to what the old CINs used for the queues, semaphores, notifiers and rendezvous functions: it stored the "class" data for a specific object instance in a sort of registry, to implement some kind of class support before native LabVIEW OOP was introduced. But it wasn't really using any built-in queues or similar functionality internally (that built-in functionality didn't fully exist at the time either). As to using a LabVIEW-created DLL running in a separate application instance, this is actually more complicated than you might guess, and it is one aspect that makes using LabVIEW-created DLLs from within LabVIEW itself extra maintenance intense. If the DLL was created in the same LabVIEW version (or was compiled with the option, available since LabVIEW 2017, to execute in a newer runtime version than it was built with), LabVIEW will load the DLL into the current application instance. If the versions don't match and that option wasn't enabled when building the DLL, LabVIEW will start up the corresponding LabVIEW runtime system, load the DLL into it, and set up interprocess marshalling to route every call to this DLL through it. Needless to say, this has some implications. Marshalling calls across process boundaries costs time and resources, so it is less performant than when everything happens in process. And as you noted, the application instance separation will prevent access to named resources such as queues between the two systems. This possible marshalling is so transparent to a normal user that he may never guess why the queue he created through the DLL call doesn't share data with another queue he "obtained" in native LabVIEW code. Logically they are totally different entities, but whether they are or not may depend on subtle differences in which LabVIEW version was used to create your DLL and which LabVIEW version you call it from. 
As to handles, I'm not quite sure which PDF you refer to. The whole handle concept originates from the Mac OS Classic days, and MacOS had handles that could be created directly by external code through calls to the MacOS Classic Toolbox. LabVIEW had special primitives that could refer to such handles so that you did not always need to copy data between the external handle and a LabVIEW handle. That support was however very limited: you basically had only a Peek and a Poke function, each taking a handle input and an offset in addition to the value. This functionality never made sense on non-Mac OS Classic platforms; I believe the primitives still existed there but were hidden, since there was no need to confuse the user with an obscure feature that was totally useless on the platform in question. Even on Mac OS it was hardly ever used, except maybe for a few hacky NI interfaces. Almost all of this information has mostly just archeological value nowadays. I'm explaining it here to save you from going down a seemingly interesting path that has absolutely no merit nowadays.
  22. I understand and admire your reverse engineering effort 😀. My answer was partly directed at the OP's claim that the Occurrence API was documented. It's not (at least not officially, although the accidental leak from early LabVIEW days could be counted as semi-official documentation). You're right that the functions you mention as lacking from those headers didn't even exist back then; they were added in later versions. The additional 50MB of code in LabVIEW.exe isn't just useless garbage 😀 (and it's only a fraction of what LabVIEW gained in weight over those 30 years, since all the manager core is now located in external DLLs). That also points out another danger of using such APIs: they are not fixed unless officially documented. While NI generally didn't just go and change existing functions for the fun of it, it is only for officially documented functions that they have gone to extreme lengths to avoid doing so. Any other function is considered fair game to be changed in a later LabVIEW version if the change is technically necessary and avoiding it would require a big extra effort. This is also the main reason they haven't documented any new functions since the initial days (with very few exceptions such as PostLVUserEvent()). Once officially documented, an API has to be considered cast in stone, and any change to its prototype or even its semantic behaviour is basically considered impossible unless absolutely unavoidable for some reason.
  23. It means you COULD theoretically interact with Queues from your own C code. In reality the function name alone is pretty useless, and there is NO publicly available documentation for these functions. Theoretically, if you are an important enough account for NI (definitely requiring 7 digits or more of yearly sales in US$) and are willing to sign an NDA that forfeits your mother, wife and children in case you breach it, you may be able to get the headers for these APIs. In practice, the only people with access to that information work in the LabVIEW development team and would likely get into serious trouble if they gave it to someone else. If you really need it, there is a workaround that comes with much less financial and legal trouble, though it can become a bit of a maintenance problem if you intend to use it across multiple LabVIEW versions and platforms: create a LabVIEW library that wraps the Queue functions you are interested in into VIs, build a DLL from those VIs exporting them as functions, and then call those functions from that DLL from within your C code.
  24. No, it's not odd! Occurrences have existed in LabVIEW since at least version 2.5. And back then NI sort of slipped up by distributing the full internal extcode.h file, which exposed pretty much every single function that LabVIEW exported and that could be accessed from a CIN (the only way to call external code back then). They fixed that in subsequent releases of 2.5.x (which was a pre-release version of LabVIEW for Windows 3.0). Much of what was declared in that header was fairly useless anyway, either because it was only usable with other parts of LabVIEW that were not accessible from external code, or because the functionality was too complex to be inferred from the header alone. NI never officially documented the Occurrence functions, but someone with access to those OOOOOOLD headers can simply take them from there, which this poster probably did. There is one caveat though: while the functions themselves probably remained the same, the documentation posted is most likely from those 2.5 headers and might not be entirely accurate anymore for the prototypes of the functions as LabVIEW exports them nowadays. The Queues, Semaphores and Notifiers that came out with LabVIEW 4 or 5 were indeed CIN based. The CIN implemented the entire functionality, internally using Occurrences for the signal handling to allow low-cost CPU waits. Around LabVIEW 6 or 7, the full CIN code for the Queues and Notifiers was revamped, fully moved into the LabVIEW kernel itself, and integrated as built-in nodes; Semaphores and Rendezvous were reimplemented in LabVIEW code, internally using Queues. Since there are no headers from LabVIEW 7 or later floating around on the internet, the declarations for the Queues and Notifiers are nowhere to be found, although with enough time at hand and disregard for the LabVIEW license terms, one could in fact reverse engineer them. 
The problem with that is that you can never be quite sure you got it all right, even if it doesn't crash at the moment. That makes it a pretty useless exercise for real use of these APIs, and for mere hobby usage the effort is both too high and requires way too specialized knowledge. Only crazy nerds disassemble code nowadays, and only super crazy nerds do that with an executable the size of LabVIEW. 😀