Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. LabVIEW is a little smarter than this. Data passed into a subVI is only copied if the wire also branches off to some other function that cannot operate independently on that data, such as an in-place operation or another subVI. Otherwise LabVIEW schedules the function that does not modify the data (it doesn't know whether a subVI would modify the data, but assumes so for safety) to execute before the subVI or any data-modifying operation, in order to avoid the data copy entirely. The data space of a VI has little to do with the parameters wired into and out of the subVI, but a lot with state information for the VI, such as shift registers.
  2. The problem with this is that you quickly start to accumulate a lot of (optional) formatting hint parameters to the function: time format, floating point and decimal format, and who knows what else. I wonder if a single string could work, where you use a syntax like the Format Into String syntax to list all possible formatting options. The decimal point would always be the first formatting option, something like %.; and other formats could be looked for based on patterns such as %<.+>T etc.
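     Purely as an illustration of that idea (this is not an existing OpenG or LabVIEW API), such a combined hint string could reuse the familiar Format Into String tokens, for example "%,;%<%d-%m-%Y %H:%M:%S>T", and the function would scan it for the tokens it understands. A minimal C sketch of that kind of scanning:
[CODE]
#include <stdio.h>
#include <string.h>

/* Illustrative sketch only: look for a decimal separator token ("%.;" or "%,;")
   and for a time format enclosed in "%<...>T" inside a single hint string,
   mirroring the Format Into String style syntax suggested above. */
static void ParseFormatHints(const char *hints)
{
    char decimal = '.';                     /* default decimal separator */
    const char *start, *end;

    if (strstr(hints, "%,;"))
        decimal = ',';

    start = strstr(hints, "%<");            /* embedded time format, if any */
    end = start ? strstr(start, ">T") : NULL;
    if (start && end)
        printf("time format      : %.*s\n", (int)(end - start - 2), start + 2);
    printf("decimal separator: %c\n", decimal);
}

int main(void)
{
    ParseFormatHints("%,;%<%d-%m-%Y %H:%M:%S>T");
    return 0;
}
[/CODE]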
  3. I don't know much about SNMP either, other than the principle of its operation, but my understanding is that there are special tools to create (compile) a MIB file from an intermediary text description that the developer of an SNMP device writes. So if you have an existing device with an SNMP implementation, you should get the according MIB file from the manufacturer of the device. If you designed the device yourself, only you can know which SNMP variables and OIDs you have used, and you will have had to dig into the whole SNMP RFCs and associated documentation anyhow to get there.
  4. Well, LabPython simply loads the Pythonxx.dll and instantiates a session. As such it will work even while a separate Python is still running. But it will not be able to communicate with anything in that Python process automatically, as each Python session is in fact a separate Python environment. If you need to communicate between a stand-alone Python session and a session created through LabPython, you have to set up explicit inter-application communication, especially since LabPython runs in the LabVIEW process while Python runs in its own process, so process memory protection applies fully here.
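     For illustration, this is roughly what hosting an embedded Python session amounts to at the C level (a minimal sketch using the standard CPython embedding API, not the actual LabPython code):
[CODE]
#include <Python.h>

int main(void)
{
    /* The hosting process (LabVIEW in the case of LabPython) links against the
       Python DLL and creates its own interpreter state. A python.exe started
       separately has a completely independent interpreter in another process. */
    Py_Initialize();
    PyRun_SimpleString("x = 21 * 2\n"
                       "print('embedded session says', x)");
    Py_Finalize();
    return 0;
}
[/CODE]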
  5. Be aware that the second solution, using InPort and OutPort, is not going to work on 64-bit Windows systems, as far as I know. You may say that this is not an issue now, but there are rumors that future Windows versions will go the same path as Apple did and be 64-bit only, without any option to boot a 32-bit version anymore.
  6. Omega is NOT ASCII code 234. It may be so in one or several specific codepages, but Windows knows literally dozens of codepages. They usually (not always) produce the same glyph for the ASCII codes 1 to 127 but have wildly varying glyphs for the codes 128 and higher. And different fonts support different codepages but are not equivalent to them. There are two ways to deal with this and be able to display more than 128 different character glyphs at the same time. Traditionally, older systems used a multibyte encoding scheme, which is what LabVIEW uses too. The second is Unicode, which is nowadays fairly common as far as platform support goes, but support on the application level varies wildly, with many applications still not able to deal with Unicode properly. Also, Unicode has some issues as far as collation tables and such go. There is the official standard from the Unicode consortium and the de facto standard as implemented by Microsoft in Windows. They differ in subtle but sometimes important ways, which makes it very hard to write a multilanguage application that uses the same code base for Windows and non-Windows systems.
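     To see the codepage effect in practice, here is a small Windows sketch: the very same byte value 0xEA (234) turns into the capital Omega when interpreted with the old DOS codepage 437, but into 'ê' with the Windows Latin-1 codepage 1252.
[CODE]
#include <windows.h>
#include <stdio.h>

int main(void)
{
    char byte[2] = { (char)0xEA, 0 };   /* "character 234" from the question */
    wchar_t out[4];

    MultiByteToWideChar(437, 0, byte, -1, out, 4);    /* DOS codepage 437 */
    wprintf(L"CP437 : U+%04X\n", (unsigned)out[0]);   /* U+03A9, capital Omega */

    MultiByteToWideChar(1252, 0, byte, -1, out, 4);   /* Windows Latin-1 */
    wprintf(L"CP1252: U+%04X\n", (unsigned)out[0]);   /* U+00EA, e with circumflex */
    return 0;
}
[/CODE]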
  7. Originally the OpenG VIs were located under the OpenG palette only. Then Jim, I believe, figured out the dynamic palette feature and the OpenG VIs were also added to the respective subcategories. However, this dynamic palette feature was quite brittle, as it didn't seem to be a designed feature, and NI broke it somehow in later versions.
  8. I won't be able to spend any time on this before the Christmas vacation. And at the same time I want to address Windows 64-bit compatibility. So I would say a good estimate for a new release that includes the newest source code already present in SVN is sometime after the start of next year.
  9. As long as it is for your own use, there is nothing that forces you to do any licensing-related stuff (this is about OpenG or other open source; use of commercial software of course needs to obey its licensing requirements even for personal use). But once you distribute your app, commercial or not, you need to comply with the open source licensing requirements, and this means you need to add some credit information for compiled apps or leave the source code intact in the distribution. Chances that a coworker is going to fuss about this if you let him use the compiled app are rather small, but strictly speaking you are in violation of the OpenG license if you don't add some credit information somewhere in your compiled app. If however you distribute the whole thing in source, that already fulfills the license requirement.
  10. I can only confirm the upgrade pains. When it was BridgeVIEW, it was a predominantly LabVIEW-based system with some external Logos (Lookout) components added in. Each version removed quite a bit of the LabVIEW-based components and replaced them with the newest hot-off-the-press NI technology. These replacements were supposed to be seamless, but in practice always caused various pains. And even once-replaced technologies sometimes got replaced with yet another, even newer technology. All in all I can't say that I can still work with the DSC toolkit anymore, as just about everything is very different from what it was at one time. I have also developed over time my own entirely LabVIEW-based data logging and monitoring system, which has many of the features of the original BridgeVIEW-based system and works perfectly fine, so my need for DSC has more or less completely disappeared. An additional bonus of this system is that it also ports quite nicely to RT targets like cRIO and Compact FieldPoint. That required some initial work to make it run well on those resource-constrained systems, but by now I can move an application based on that framework almost seamlessly between desktop and those RT targets. And multiple applications can share their tag database with each other, so a truly distributed system is just some extra initial configuration effort away.
  11. Not really much NI can do about it. This is not a standard socket property at all, and the Windows socket implementation doesn't even support setting it through the API. So even if NI ever decides to add a property interface to network refnums, it's not possible to change this setting from the program in any way. And the 200 ms acknowledgment is a standard TCP/IP socket feature that Microsoft implemented in Windows 2000 to conform to the standard. So the really faulty party here is the PLC, which resends already after 50 ms, if you want to call it a fault. Realistically it's just a workaround to guarantee that any packet is acknowledged after no more than 50 ms :-) Enabling this registry setting in Windows is the only way to change the behavior, and you don't ever want LabVIEW to change this behind your back in the registry! Especially since it enables this feature for every connection on that interface, which can be a real burden on network traffic if normal internet traffic also happens to go through this interface.
  12. Yes! An invisible wiring error? That's the cause in about 99.99% of the cases for me when I see such strange behavior. The wire looks as if it is coming from the tunnel but in reality comes from some other place that resets the reference. Or there is a tunnel that uses default data if not wired, and another one on top of it that carries your reference from that case but goes nowhere. That is one reason why I really think twice before enabling the "use default data" option on any tunnel.
  13. Well, there is a #pragma pack(1), so it sure will work. And more importantly, this structure has no alignment issue anyway, since a float is a 4-byte entity, so together with the int it aligns the double on an 8-byte boundary, no matter what packing definition is used. Change the float to a double (and make the according change in the VI too!!!!) and the #pragma becomes important again (unless he also changed the default alignment in the project settings!).
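     To make the alignment argument concrete, here is a small sketch (the member names are made up): with float/int/double the double already sits on an 8-byte boundary, so the packing pragma makes no difference, while changing the first member to a double introduces padding under default alignment.
[CODE]
#include <stdio.h>

#pragma pack(1)
typedef struct {        /* float + int happen to align the double anyway */
    float  f;           /* offset 0 */
    int    i;           /* offset 4 */
    double d;           /* offset 8, 8-byte aligned with or without packing */
} PackedData;           /* sizeof == 16 in both cases */
#pragma pack()

typedef struct {        /* first member changed to double, default alignment */
    double f;           /* offset 0  */
    int    i;           /* offset 8  */
    double d;           /* offset 16 (4 padding bytes after i); with
                           #pragma pack(1) it would sit at offset 12 */
} UnpackedData;         /* sizeof == 24 by default, 20 when packed to 1 byte */

int main(void)
{
    printf("PackedData:   %u bytes\n", (unsigned)sizeof(PackedData));
    printf("UnpackedData: %u bytes\n", (unsigned)sizeof(UnpackedData));
    return 0;
}
[/CODE]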
  14. If it is Visual C++, it for sure uses 8-byte default alignment. So wrap the declaration of the struct between the packing pragmas:
[CODE]
#pragma pack(1)
typedef struct {
    ..........
} Structname;
#pragma pack()
[/CODE]
     It's always a good idea to reset any alignment setting back, as otherwise you get a big mess where the order of includes changes the behavior.
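     A variant that is not in the post above but serves the same purpose: Visual C++ and gcc both support the push/pop form, which restores whatever packing was active before instead of falling back to the compiler default (the struct members here are just placeholders).
[CODE]
#pragma pack(push, 1)   /* remember the current packing, then switch to 1 byte */
typedef struct {
    double value;       /* hypothetical members, for illustration only */
    int    flag;
} Structname;
#pragma pack(pop)       /* restore the packing that was active before */
[/CODE]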
  15. I think so. Leave it on SourceForge for whoever wants to look at its implementation and do his or her own stuff with it. But it probably shouldn't be an official OpenG package anymore. I started it for a quick and dirty test once, and after that it became a personal thing between me and Windows, trying to force it into submission. Once that point was proven, it lost its appeal for me. I learned quite a bit about Windows device driver programming with it, but mostly stuff I didn't really want to know.
  16. Most likely you don't want to call the Constructor Node at all after the first initial call from your top-level routine. The proper solution would be to separate the construction of the object from any other initialization that you might want to do from your plugin. A quick fix would be to wrap the Constructor Node in a case structure that is selected by the Not A Number/Refnum primitive. This will still create the refnum the first time around (and even if it somehow became invalid in the meantime, which it shouldn't anymore).
  17. I personally find 2009 a rather steep step to go to. I still have projects that are maintained in 7.1.1 and I don't like to jump through hoops to back-convert the code after each modification. I can look into the directory layout you mention. Can you point me to a document that describes the specifics and the exact problems this is supposed to solve? PortIO is, as far as I'm concerned, EOL. I don't plan to do any maintenance on it. It solves a problem in a way that shouldn't even be considered a solution in modern OSes, and it uses technology that is very unportable to other LabVIEW platforms, including 64-bit Windows.
  18. The recommended way to do what you want, without even having to issue VI Server calls to open and size the window, is to simply select the Dialog option under Window Appearance in the VI settings. This will change various specific settings, such as showing the front panel when called and closing it afterwards, the modal state, disabled scroll bars and a few other things. Especially under Windows, some of these properties are exclusive, meaning that if you enable or disable one of them you also disable or enable something else automatically, and LabVIEW can't even do anything about that. You can still execute code to size and place the dialog if you need to, but you should minimize the number of programmatic window style modifications as much as possible, as they can implicitly unset settings you made in the configuration.
  19. This is a library implemented in C that claims to implement the ONC/RPC protocol. It's old but fully based on the original Sun specs. This is a C# implementation, and this and this are in Java.
  20. You could skip the intermediate cmd process creation altogether by directly piping into the (Mercurial) command line tool. And if you use the OpenG Pipe functions you could potentially even use one single Mercurial command line instance to issue many commands one after another through a pipe to its stdin, and receive any response through another pipe from its stdout. Your hunch that process creation is quite an expensive task under Windows is quite right, and doing this through cmd really doubles that cost. Unless you also need to access Windows shell features, such as its built-in commands or locating an unqualified executable name in one of the PATH directories, it is usually not a problem to instantiate the target command line app directly, by simply issuing its fully qualified exe name as the first parameter and leaving out the cmd /c step completely.
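     As a rough illustration of launching the command line tool directly (the path to hg.exe is an assumption; redirecting stdin/stdout through pipes as described above would additionally need CreatePipe and the STARTUPINFO handle fields):
[CODE]
#include <windows.h>

int main(void)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    /* fully qualified executable name as first parameter, no "cmd /c" involved */
    char cmdLine[] = "\"C:\\Program Files\\Mercurial\\hg.exe\" status";

    if (CreateProcessA(NULL, cmdLine, NULL, NULL, FALSE,
                       CREATE_NO_WINDOW, NULL, NULL, &si, &pi))
    {
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;
}
[/CODE]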
  21. Not really. If you compare the typestring of an ActiveX refnum with the typestring of another ActiveX refnum, they are only equal if both GUIDs are the same. The class GUID basically identifies the whole class and the interface GUID the actual interface implementation. If only the class GUID is the same, then it is not the same refnum type but a different one (with usually different properties and methods).
  22. Thanks for pointing this out. And no, I haven't run the ZLIB library on every possible target that is out there, for several good reasons:
     1) I don't have all that hardware available.
     2) I do have a normal daytime job to earn my living.
     3) LVZLIB already targets Windows 32-bit, MacOS Classic, MacOS X 32-bit (with some issues related to long path names), Linux, Pharlap ETS, and VxWorks, and I'm working on Windows 64-bit support too. Testing the library myself on all of these systems with many different scenarios is simply not an option.
     What I did in the case of the RT targets was simply assume that the Pharlap ETS system should work with the same code as the Windows system. In the case of the VxWorks system I simply compiled the according .out file and asked people to test it, which of course didn't work to start with. Thanks to a very helpful engineer at NI who had access to the (rather expensive) integrated development system for VxWorks, we could debug the issues, which actually required some significant changes to the entire source code framework to make it work for VxWorks.
     As to the path handling: can you positively confirm that your change will work without issues on all the aforementioned targets? If someone can confirm this to me, it would be fairly easy to make the necessary changes to the library, possibly at a place that is less performance sensitive, such as in the actual shared library code itself. Specifically, the change to the path separator should be handled in "ZLIB Common Path to Specific Path.vi" instead. It also takes care of stripping any trailing separator from the path, so the real fix is to simply use this VI. The VI in the SVN repository does this already and I changed that in April of this year, but for some reason your library doesn't seem to use that VI. I'll check what the OpenG library contains and whether there is indeed some problem with having the latest source code in the package.
     EDIT: OK, I checked. 4.0.0-2 seems to be based on the old 2.5.1 source code and doesn't contain the fix to treat the Pharlap platform as a Windows platform in "ZLIB Common Path to Specific Path.vi". But the original code definitely calls "ZLIB Common Path to Specific Path.vi" in "ZLIB Extract All Files To Dir.vi", and the only thing needed to make it work for Pharlap platforms is to add the Pharlap enumeration to the case structure inside "ZLIB Common Path to Specific Path.vi" that already handles the Windows 3.1 and Windows 95/NT enumerations. There is no need to handle directories and files differently, as the LabVIEW Build Path primitive is smart enough to work correctly for both.
  23. The code as posted in the diagram won't work at all. You cannot emulate an embedded fixed-size array in a cluster with a LabVIEW variable-sized array. Instead you have to use a cluster that contains the same number of elements of the same datatype to match the fixed-size array. So a GUID is really a cluster of type {uint32, uint16, uint16, {uint8, uint8, uint8, uint8, uint8, uint8, uint8, uint8}}.
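     For reference, this is the Windows declaration of a GUID; the Data4 member is exactly such an embedded fixed-size array, which on the LabVIEW side has to become a cluster of eight U8 values rather than a LabVIEW array:
[CODE]
typedef struct _GUID {
    unsigned long  Data1;       /* uint32                                   */
    unsigned short Data2;       /* uint16                                   */
    unsigned short Data3;       /* uint16                                   */
    unsigned char  Data4[8];    /* eight uint8, embedded in the struct body */
} GUID;
[/CODE]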
  24. This is a smart implementation, but I personally would feel a little concerned about creating a potentially large XML string just to get that information. The solution from the original poster had one big problem: it only really checked whether the refnum is ANY ActiveX refnum. It should have gone further and compared not only two elements of the typestring but in fact 23 (possibly even 27, but I'm not sure those additional 4 words provide any meaningful information). Basically the ActiveX-specific typestring description contains 0x3110, 0x0, 0x4, <16 bytes for the ActiveX class GUID>, 0x0, 0x1, <16 bytes for the interface GUID>, and then some other stuff (usually 0x0, 0x1, 0x0, 0x0). It's quite possible that the constant numbers could change with different ActiveX refnums, but my own testing seemed to indicate that they stay the same for several different ActiveX refnums and quite likely are really the same for the same ActiveX type. To be really correct, the more significant byte of the 2nd value of the full typestring should be masked out, as it contains LabVIEW-private flags about the control the wire comes from.
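     Written down as a C structure purely from the observations above (none of this is documented by NI; the constant words are only what my own tests showed and may differ between LabVIEW versions), the ActiveX-specific part of the typestring would look roughly like this:
[CODE]
#pragma pack(push, 1)
typedef struct {
    unsigned short marker;          /* observed as 0x3110                  */
    unsigned short word1;           /* observed as 0x0                     */
    unsigned short word2;           /* observed as 0x4                     */
    unsigned char  classGUID[16];   /* ActiveX class GUID                  */
    unsigned short word3;           /* observed as 0x0                     */
    unsigned short word4;           /* observed as 0x1                     */
    unsigned char  ifaceGUID[16];   /* ActiveX interface GUID              */
    unsigned short tail[4];         /* usually 0x0, 0x1, 0x0, 0x0          */
} ActiveXTypeDesc;                  /* speculative layout, for comparison only */
#pragma pack(pop)
[/CODE]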