
Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. What doesn't work with the function AJ_NETSDK_IPC_PTZControl() on page 21/22? Or are you not using the SDK functions to retrieve the RTSP stream but some other ready-made interface for LabVIEW, meaning you have no idea how to interface to a DLL? A few points to consider: 1) The camera may not like a secondary connection, either through the SDK or through generic TCP/IP, while it is busy streaming image data to VLC or whatever interface. 2) Trying to reverse engineer the TCP/IP binary stream protocol is likely going to be cumbersome and difficult to realize, as it is usually proprietary. The SDK interface is simple enough to use, unless you lack any and all understanding of C programming. It's not a CIN node that you will need to configure but a CLN (Call Library Node). CINs are not only legacy technology but simply not supported anymore in most modern LabVIEW versions. An interesting problem, but not one I can help you with, as I do not have that hardware, and I would expect it to be a bit cumbersome considering the above two points.
  2. Well, there are two aspects. The first is the technical one: hackers diving into the software and unhiding things that NI felt were not ready for prime time, too complicated for simple users, or possibly also too powerful. The main reason, however, definitely always is: if we release that, we have to spend a lot more effort to make it a finished feature. A feature for internal use (where you can tell your users: "sorry, that was not meant to be used in the way you just tried") takes maybe 10 - 20% of the development time of a finished feature for public use. There is also support required. That costs money in terms of substantial extra development, end-user-quality documentation (a simple notepad file doesn't cut it), maintenance, and fixing things if something does not match the documented behaviour. And yes, I'm aware they don't always fix bugs immediately (or ever), but the premise is that releasing a feature causes a lot of additional costs and obligations, whether you want it or not. The other aspect is that someone who is an active partner and has active contacts with various people at NI is infinitely more likely to be able to influence decisions at NI than the greatest hacker doing his thing in his attic and never talking with anyone from NI. In that sense it is very likely that Jim, having talked with a few people at NI, has done a lot more to make NI release this feature eventually than 20 hackers throwing every single "secret" about this feature on the street. In that sense the term "forcing NI's hands" is maybe a bit inaccurate. He didn't force them, but led them to see the light! Not out of pure selfless love, but to be able to officially use that feature for himself. The accompanying Right-Click framework was a proof of concept to see how this feature could be used, and mainly an example to other users of how it can be used, and indeed once it worked it had fulfilled its purpose. That it was not maintained afterwards is not specifically JKI's fault. 
It is open source, so anyone could have picked up the baton if they felt it was valuable enough for them. The actual problem with many libraries is: if they are not open source and free, many complain about that; if they are open source and/or free, people still expect full support for them! In that sense I have seen a nice little remark recently:
  3. NI was quite invested in EtherCAT circa 2012 to 2016 but then lost its interest in anything not related to big turnkey testing systems. The Industrial Communication tools were part of that, together with Motion, Vision and pretty much anything else except DAQ and PXI. Basically, EtherCAT after about LabVIEW 2017 is more of a "Take it as it is! If it works for you, great! If it doesn't, please don't bother us!" Even in LabVIEW 2017 it was still far from a plug-and-play experience. I worked on a project at that time, trying to interface an IC-3173 Industrial Controller to several different pieces of EtherCAT hardware. It was a long and tedious process, and back then we could actually talk to support people who tried to help but were often just as stumped as we were about why something didn't work. There were many subtleties, and it was not always NI's fault when something didn't work, but it was very difficult to get a deeper look into things, as NI tends to hide a lot behind simple-looking configurations. At this point I have very strong doubts that anyone at NI even knows how any of these Industrial Communication products work. They still sell a few things that can work if you are lucky, but if it doesn't work, they can't support it in any way, since there simply is nobody left who has worked on these products in many years. The good news is that NI definitely has been getting a lot more proactive on a lot of subjects in the last year. Their efforts to get the community involved again still need to be fully realized, but the communicated intentions are strong and fairly consistent, so I dare to have hope. From some communications I had with them, it seems that they actually are planning to concentrate more on certain core areas and actively look for 3rd parties in the market to provide alternative solutions for the products they don't consider a core technology. 
Industrial Communication interfaces seem to be one such area, where they would be happy to promote an alternative rather than maintaining their own solution, but they might pick up on some of them, as they are not all as easy to support for 3rd-party providers.
  4. Not likely. I think the administrators always resisted such requests unless there was a real legal matter involved. But LavaG had several nervous breakdowns over the years, either because a hard disk crashed or the forum software somehow threw a fit. It was always restored as well as possible, but in at least one of those incidents a lot got lost. Some of that was subsequently restored from archives other people had maintained from their push notifications from this website, but quite a bit got lost then. You can still see some old posts where the whole text is underscored and links, as far as they are present, point into nirvana. These are presumably poorly restored posts, by now of at best historical value, so it is hard to justify trying to clean them up.
  5. Aren't shortcut menus invoked in the UI thread, and maybe they instantiate any menu-handling VIs in the LVDialogContext? The same one that is used when you place VIs in the Tools menu (or maybe a similar specific context). Did you try a This App Reference constant?
  6. The idea about NXG, in terms of how "bad" the old code base was, always struck me a bit like throwing out the baby with the bathwater. And while I wasn't involved in any of the user sessions about NXG myself, from the things I heard I got the impression that they were mainly looking for an echo chamber instead of real constructive critique. (Probably one reason I did not get contacted, as I never held back about not being a cheerleader.) 😁
  7. Thanks for your efforts Michael. It is appreciated a lot. I just checked the logs and am surprised how many user creation log entries there are. At this rate we should have a huge amount of traffic here. I only wonder if this is an ongoing effort from hackers trying to get a foothold in the wiki (or maybe indirectly also LavaG) to massively spam it at some point, or if most of those entries are actually genuine. I hope of course for the latter, and looking at the names it could be true for some. There are of course obvious exceptions like https://labviewwiki.org/wiki/User:Battery_trade-in_financial_gains
  8. Well, it would seem to me that you are a newbie with LabVIEW and should first start to learn the basics before trying to dig into specific projects. Have a look at the resources mentioned at the top of this page: https://forums.ni.com/t5/LabVIEW/bd-p/170 Go through the NI Learning Center links first, then explore from there. You have to learn to walk before you can try to run or do a marathon.
  9. You will have to provide some example. For string-named nodes, the NodeID normally simply is "ns={x};s={YourNodeName}". An OPC UA node ID is *not*, by itself, a string. It is primarily a structure containing the namespace (URI, index or both) and the identifier. An OPC UA node ID *has* a string representation, but the string representation is not the node ID itself. The very same OPC UA node ID can be represented by multiple strings, so you should not assume that two strings that appear different represent different node IDs. For example, "ns=1;s=SomeNode" and "ns=1;SomeNode" are strings that represent precisely the same node ID, because the "s=" prefix (for string identifiers) is optional. Or, "ns=0;s=Objects" may be the same as simply "s=Objects", because the default parsing context assumes namespace index zero when it is missing from the string. The only thing you need to somehow know is what namespace index your variable is in, and then you can easily construct a NodeID from a variable name yourself. If you want to do it programmatically, you would have to enumerate the namespace of the server with the Browse functionality and search for your specific variable name, but note that that may not always lead to a correct result. It is very possible that a server has multiple variables with the same name but under different namespaces. So programmatic resolution may end up giving you the wrong NodeID, as it cannot know which one it should use if it only has the name to go by.
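A tiny Python sketch (a hypothetical parser, not any official OPC UA library) of why the different-looking strings above can denote the very same node ID: the namespace defaults to index 0 when "ns=" is absent, and the "s=" string-identifier prefix is optional.

```python
# Hypothetical helper illustrating OPC UA node ID string equivalence.
# "ns=" defaults to 0 and the "s=" (string identifier) prefix is optional.

def parse_node_id(text):
    """Parse a node ID string into (namespace_index, id_type, identifier)."""
    ns = 0                      # default namespace index when "ns=" is absent
    id_part = text
    if text.startswith("ns="):
        ns_str, _, id_part = text.partition(";")
        ns = int(ns_str[3:])
    # identifier type prefix: s= (string), i= (numeric), g= (GUID), b= (opaque)
    if len(id_part) > 1 and id_part[1] == "=" and id_part[0] in "sigb":
        id_type, identifier = id_part[0], id_part[2:]
    else:
        id_type, identifier = "s", id_part   # assume a string identifier
    return (ns, id_type, identifier)

# All of these strings represent the same node:
assert parse_node_id("ns=1;s=SomeNode") == parse_node_id("ns=1;SomeNode")
assert parse_node_id("ns=0;s=Objects") == parse_node_id("s=Objects") == parse_node_id("Objects")
```

So comparing node IDs should always happen on the parsed structure, never on the raw strings.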
  10. You're mixing and matching two very different things. The 2GB RAM is the physical RAM that the CPU and the Linux OS can use. But the RAM in the FPGA that is used for lookup tables is a very different sort of RAM and much more limited. The 9043 uses the Xilinx Kintex-7 7K160T FPGA chip. This chip has 202,800 flip-flops, 101,400 6-input LUTs, 600 DSP slices and 11,700 kbits of block RAM or BRAM, which is what can be used for the LUTs. If you really need these huge LUTs, you'll have to implement them in the real-time part of the application program, not in the FPGA part.
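The arithmetic behind that advice can be sketched quickly. The BRAM figure is the one quoted above; the table size used here is just an illustrative assumption:

```python
# Back-of-the-envelope check of why a large lookup table cannot fit into the
# Kintex-7 7K160T block RAM (figure from the post: 11,700 kbit of BRAM total).

BRAM_KBITS = 11_700                       # total block RAM on the 7K160T

def lut_kbits(entries, bits_per_entry):
    """Storage a lookup table needs, in kbits."""
    return entries * bits_per_entry / 1024

# e.g. a hypothetical 1M-entry table of 16-bit values:
need = lut_kbits(2**20, 16)               # 16,384 kbit
print(f"LUT needs {need:.0f} kbit, FPGA has {BRAM_KBITS} kbit of BRAM")
print("fits in BRAM" if need <= BRAM_KBITS else "does not fit; keep it on the RT side")
```

Even claiming every single BRAM block for one table, a 1M x 16-bit LUT overshoots the chip, while it is a trivial 2 MB array in the 2GB of CPU RAM on the real-time side.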
  11. You can't automatically. There could be the same name in several namespaces. You will have to browse the various namespaces and search for that specific name. That can be done programmatically with the BrowseForward function. Or, if you know in which namespace they all are, you can of course reconstruct the NodeID string yourself, something like ns=1;s=your_string_name or, if it is a URI namespace: nsu=https://someserver/resource;s=your_string_name
  12. How is that name created? Variables in OPC-UA are really only known by their node ID anyway. This can be a string node ID that gives you a name, but what that string would be, if any, depends entirely on the server configuration. There is no way that you can take some random string name and have the server answer you with what node ID that could be, since a node is simply known by its node ID and nothing else. So the real question is: how did that string name in your spreadsheet file get generated? Is it the string part of a string-type node ID with the namespace part removed?
  13. Ahh, ok, that I would wholeheartedly agree with. It's probably mostly going through the UI thread and possibly a lot even through the root loop. It touches many global resources, and that's the easiest way to protect things. Adding mutexes all over the place makes things nasty fast, and before you know it you are in deadlock territory if you are not VERY careful.
  14. Faster is a bit debatable. 😁 The scripting method took me less than half an hour to find out that it doesn't return what I want. Should have done that from the start before getting into a debate with you about the merits of doing that. 😴 The file is of course an option, except that there are a number of pitfalls. Some newer versions of LabVIEW seem to Deflate many of the resource streams and give them a slightly different name. Not a huge problem, but still a bit of a hassle. And the structure of the actual resources is not always quite cast in stone; NI did change some of that over the course of time. But pylabview definitely has most of the information needed for doing that. 😁 One advantage: it would not be necessary to do that anywhere else in the OGB library. The original file structure should still be guaranteed to be intact at the time the file has been copied to its new location! But it goes clearly beyond the SuperSecretPrivateSpecialStuff that OpenG and VIPM have dared to endeavor into so far. Considering that I'm currently trying to figure out how to do LabVIEW Variants from the C side of things, as properly as possible without serious hacks, it's however not something that would scare me much. 😁
  15. Let's agree to disagree. 😀 However, some testing seems to indicate that it is not possible with the publicly available functions to get the real wildcard library name at all. It's there in the VI file and the Call Library Node can display it, but trying to read it through scripting will always return the fully qualified name as it was resolved for the current platform and current bitness. No joy in trying to get the wildcard name. That's rather frustrating!
  16. It depends what your aim is. If your aim is to get something working most of the time with as little modification as possible to the OpenG OGB library, then yes, you want to guesstimate the most likely wildcard name, with some exception handling for specially known DLL names when writing back the Linker Info. If your aim is to use the information that LabVIEW used when it loaded the VIs into memory, you want to get the actual Call Library name. And you want to get that as early as possible, preferably before the OGB tool has resaved the VIs and post- and/or prefixed their names to a new location. All these things have the potential to overwrite the actual Call Library Node library name with the new name and destroy the wildcard name. So while you could try to query the library name in the Write Linker Info step, it may be too late to retrieve that information correctly. That means modifying the OGB library in two places. Is that really that much worse than modifying it in one place with a name that is only correct most of the time?
  17. It does. The Call Library Node explicitly has to be configured as user32.dll or maybe user32.*. Otherwise, if the developer enters user*.*, it won't work properly when loading the VI in 64-bit LabVIEW.
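As I understand the wildcard rules, the "*" in the name part gets substituted with the bitness of the running LabVIEW and a ".*" extension with the platform's library extension. This is a sketch of that assumption, not a verified reimplementation of LabVIEW's loader (and it ignores details like the "lib" prefix on Linux), but it shows why user*.* breaks in 64-bit LabVIEW on Windows:

```python
# Sketch of the assumed Call Library Node wildcard substitution:
# "*" in the name is replaced with 32/64, ".*" with the platform extension.

def resolve_library_name(name, bitness=64, platform="windows"):
    ext = {"windows": "dll", "linux": "so", "mac": "framework"}[platform]
    base, _, extension = name.rpartition(".")
    if extension == "*":
        extension = ext                      # ".*" -> platform extension
    base = base.replace("*", str(bitness))   # name "*" -> bitness suffix
    return f"{base}.{extension}"

print(resolve_library_name("user32.*", 64))  # user32.dll -> exists on both bitnesses
print(resolve_library_name("user*.*", 64))   # user64.dll -> does not exist on Windows
print(resolve_library_name("user*.*", 32))   # user32.dll -> only works by accident in 32-bit
```

Windows names its 64-bit system library user32.dll as well, so only the literal user32.dll or user32.* configuration resolves correctly under both bitnesses.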
  18. It's pretty simple. You want to get the real information from the Call Library Node, not some massaged and assumed file name further down the path. Assume starts with ass, and that is where it sooner or later bites you! 😀 The Call Library Node knows what library name the developer configured, and that is the name that LabVIEW uses when you do your unit tests. So you want to make sure to use that name and not some guesstimated one. The Read Linker Info unfortunately only returns the recompiled path and file name after the whole VI hierarchy has been loaded and relinked, possibly with relocating VIs and DLLs from anywhere in the search path if the original path could not be found. That is, as far as the path goes, very useful, but it loses the wildcard name for shared libraries. The delimited name would have been a prime candidate for returning that information. This name exists to return the fully namespaced name of VIs in classes and libraries. It serves no useful purpose for shared libraries, though, and could have been used to return the original Call Library Node name. Unfortunately it wasn't. So my idea is to patch that up in the Linker Info when reading that info and later reuse it to update the linker info when writing it back to the relocated and relinked VI hierarchy.
  19. Yes, my concern is this: it's ok for your own solution, where you know what shared libraries your projects use and how such a fix might have unwanted effects. In the worst case you just shoot yourself in the foot. 😀 But it is not a fix that could ever be incorporated in VIPM proper, as there is not only no way to know for what package builds it will be used, but also an almost 100% chance that an affected user will simply have no idea why things go wrong and how to fix them. If it is for your own private VIPM installation, go ahead, do whatever you like. It's your system after all. 😀
  20. It's my intention to throw the VI, once it seems to work for me, over the fence into JKI's garden and see what they will do with it. 😀 More seriously, I plan to inform them of the problem and my solutions through whatever support channels I can find. It does feel a bit like a black hole though. The reason it works the way it does is most likely simply how the Read Linker Info in LabVIEW works: they use that and the information it returns, which does not include the original wildcard name. Also, most of that is the original OpenG package builder library with some selected fixes since, for newer LabVIEW features. I also don't get the feeling that anyone really dug into those VIs very much to improve them beyond some bug fixes and support for new LabVIEW features since the library was originally published as an OpenG library (mainly developed by Konstantin Shifershteyn back then, with some modifications by Jim Kring and Ton Plomp until around 2008).
  21. We are also developers, not just whiners! 😀 Honestly, I did try in the past to get some feedback. It was more specifically about supporting more fine-grained target selections than just Windows, Linux and Mac in the package builder: to also support 32-bit or 64-bit not just for the entire package, as VIPM eventually supported, but rather for individual file groups in the package. I never received any reaction on that. Maybe it was because I wanted to use it with my own OpenG Builder; I just needed the according support in the spec file for the VIPM package installer to properly honor things. Didn't feel like distributing my own version of the package installer too. That format allows specifying the supported targets, LabVIEW version and type on a per-file-group basis. The VIPM package builder is a bit more limited, for the purpose of easier package configuration when building packages. I'm using that spec file feature through my own Package Builder to build a package that contains all shared libraries for all supported platforms and then install the correct shared libraries for Linux or Windows in the OpenG ZIP (and Lua for LabVIEW) package, depending on what platform things are installed on. Unfortunately the specification for the platform and LabVIEW version does not seem to support 32-bit and 64-bit distinctions. So I have to install both and then either fix things up to the common name in the Post Install, or use the filename*.* library name scheme.
  22. Actually, I'm more inclined to hack things here: I don't feel like traversing the entire linker info array recursively if that is done anyhow in the next VI in the execution flow! But the current implementation is a bit naive, so I did some refactoring on that too.
  23. The most proper solution will be to go through all the files retrieved through the Read Linker Info function in Generate Resource Copy Info.vi, find all shared libraries and from there their callers, enumerate the Call Library Nodes in their diagrams, retrieve the library name, and patch up the "delimited name" in the linker info for the shared library with the file name portion of that library name. In Copy Resource Files and Relink VIs.vi, simply replace the file name portion of the path with the patched-up "delimited name" value! The finding of the callers is already done in Generate Resource Copy Info.vi, so we "just" have to go through the Resources Copy Info array, get the shared library entries and then check their Referrers. This assumes that all the library paths are properly wildcard formatted prior to starting the build process, as it will maintain the original library name in the source VIs. Possible gotchas still to find out and resolve: 
- Is the library name returned by the scripting functions at this point still the wildcard name, or already patched with the resolved name while loading the VI? 
- Are all the scripting functions to enumerate the Call Library Nodes and retrieve the library name also available in the runtime, or only in the full development environment? Although the Build Application.vi function is most likely executed over VI Server in the target LabVIEW version and not as part of the VIPM executable runtime?
  24. Yes, we are kind of talking past each other here, mostly because it is an involved low-level topic. And VIPM is in this specific case technically pretty innocent. It uses the private Read Linker Info and Write Linker Info methods in LabVIEW to retrieve the hierarchy of VIs, relocate them on disk to a new hierarchy (which is quite a feat to do properly), patch up the linker info to correctly point to the new locations, and write it back. The problem is that the paths returned by Read Linker Info are nicely de-wildcarded, which is a good thing if you want to access the actual files on disk (the standard file functions know nothing about shared library wildcard names), but a bad thing when writing back the linker info. Read Linker Info itself does not deliver the original wildcard name. It could theoretically do so in the "delimited name" string for shared libraries, as that name is otherwise not very useful for shared libraries, but it doesn't, which is in fact a missed opportunity, as the fully resolved name is already returned correctly in the path. The only way to fix that consistently across multiple LabVIEW versions is to go through the array of linker infos after doing the Read Linker Info, recursively determine the callers that reference a specific shared library, enumerate their block diagrams for any Call Library Nodes, retrieve their library name and replace the "delimited name" in the linker info with the file name portion of that! Then, before writing back the Linker Info, replace the file name in the path; in newer LabVIEW versions you additionally have the "original Name" string array when writing back the Linker Info, so it might be possible to pass that name in there. Quite an involved modification to the existing OpenG Builder library.
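A minimal sketch of that patch-up idea in Python, with a hypothetical record type standing in for LabVIEW's private linker info structures (the real data comes from the Read Linker Info method and is considerably more involved):

```python
# Hypothetical linker-info record: stash the wildcard name from the Call
# Library Node in the otherwise-unused "delimited name" field, then restore
# it into the path just before writing the linker info back.

from dataclasses import dataclass
from pathlib import PureWindowsPath

@dataclass
class LinkerEntry:
    path: str             # fully resolved path as returned by Read Linker Info
    delimited_name: str   # unused for shared libraries; repurposed here

def patch_delimited_name(entry, cln_library_name):
    """After reading: store the wildcard file name the CLN was configured with."""
    entry.delimited_name = PureWindowsPath(cln_library_name).name
    return entry

def rewrite_path_for_writeback(entry):
    """Before Write Linker Info: put the wildcard file name back into the path."""
    p = PureWindowsPath(entry.path)
    return str(p.with_name(entry.delimited_name))

e = LinkerEntry(path=r"C:\build\lvzlib_64.dll", delimited_name="")
patch_delimited_name(e, "lvzlib.*")        # name as configured in the CLN
print(rewrite_path_for_writeback(e))       # C:\build\lvzlib.*
```

The directory part of the resolved path stays intact (it is needed to copy the actual file), while only the file name portion reverts to the wildcard form for the write-back.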
  25. This works on Linux too, and did so in the past for the OpenG ZIP library. I'm not really sure anymore what made me decide to go with the 32/64-bit naming, but there were a number of requests for varying things, such as being able to not have the VIs be recompiled after installation, as that slows down installation of entire package configurations. And the OpenG Builder was also indiscriminately replacing the carefully managed lvzlib.* library name with lvzlib.dll in the built libraries, so that installation on Linux systems created broken VIs. In the course of trying to solve all these requirements, I somehow came up with the different bitness naming scheme with the LabVIEW wildcard support, but I think now that that was a mistake. In hindsight I think the "only" things that are really needed are: 
- leave the shared library name (except the file ending of course) the same for all platforms 
- the simple modification in the VIPM Builder library to replace the shared library file name extension back to the wildcard 
- add individual shared libraries for all supported platforms to the OGP/VIP package. But since VIPM only really has support for individual system platforms and not for the bitness of individual files, we need to name the files in the archive differently per bitness and let both install on the target system with their respective names 
- create a PostInstall hook that will rename the correct shared library for the actual bitness to the generic name and delete the other one 
Of course, with a bit more specific bitness support in VIPM it would be possible to define the shared library to install based on both the correct platform and bitness, but that train has probably long departed. Will have to rethink this for the OpenG ZIP library a bit more, and most likely I will also add an extra directory with the shared libraries for the ARM and x64 cRIOs to the installation. 
That will require manually installing them on the cRIO target for the library to work there, but the alternative is to: 
- have the user add a shared network location containing the according opkg packages to the opkg package manager on the command line of the cRIO 
- do an opkg install opgzip<version> on that command line 
That's definitely not more convenient at all!!!
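The PostInstall rename step described above could look something like this sketch, with Python standing in for the actual PostInstall VI and the file names (lvzlib_32.dll / lvzlib_64.dll) being assumptions for illustration:

```python
# Sketch of a PostInstall hook: both bitness variants were installed; keep the
# one matching the running process, renamed to the generic name, drop the other.

import os
import struct

def post_install(lib_dir, base="lvzlib", ext="dll"):
    bitness = struct.calcsize("P") * 8       # 32 or 64, bitness of this process
    other = 32 if bitness == 64 else 64
    keep = os.path.join(lib_dir, f"{base}_{bitness}.{ext}")
    drop = os.path.join(lib_dir, f"{base}_{other}.{ext}")
    # rename the matching variant to the generic, wildcard-compatible name
    os.replace(keep, os.path.join(lib_dir, f"{base}.{ext}"))
    if os.path.exists(drop):
        os.remove(drop)                      # delete the other bitness variant
```

After the hook runs, VIs configured with lvzlib.* (or lvzlib.dll on Windows) find a single correctly named library, regardless of which bitness of LabVIEW the package was installed into.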