Rolf Kalbermatter


Everything posted by Rolf Kalbermatter

  1. Not simply like that. It was part of project work, so copyright is an issue. And it is definitely too big a project to tackle as a one-man show. One complication with OpenCV is that the newer API is C++ based, so you will not get around creating an intermediate shared library either. And unfortunately the IMAQ control does not seem to work on non-Windows platforms, even though it is mostly a built-in control in LabVIEW itself and not provided by the IMAQ Vision software, so there is the need to add an additional element to handle efficient display of images in some form of external window, and that is, thanks to the different window managers, not always straightforward.
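To illustrate the intermediate shared library point: LabVIEW's Call Library Node can only call plain C entry points, so the C++ based OpenCV calls have to be hidden behind an extern "C" wrapper compiled into a shared library. A minimal sketch, where the function name, parameters and the Gaussian blur operation are just made-up placeholders and not code from that project:

    // blur_wrapper.cpp -- hypothetical wrapper, built as a shared library, e.g.
    //   g++ -shared -fPIC blur_wrapper.cpp -o libblurwrap.so $(pkg-config --cflags --libs opencv4)
    #include <opencv2/imgproc.hpp>

    extern "C" int BlurImageU8(unsigned char *pixels, int width, int height,
                               int rowBytes, int kernelSize)
    {
        if (kernelSize < 1 || (kernelSize % 2) == 0)
            return -2;                     // OpenCV requires an odd, positive kernel size
        try
        {
            // Wrap the LabVIEW-owned greyscale buffer without copying it
            cv::Mat img(height, width, CV_8UC1, pixels, (size_t)rowBytes);
            cv::GaussianBlur(img, img, cv::Size(kernelSize, kernelSize), 0.0);
            return 0;
        }
        catch (const cv::Exception &)
        {
            return -1;                     // translate C++ exceptions into a plain error code
        }
    }

The CLN then only sees an int function taking an array data pointer and a few integers, which it can call on any platform the wrapper library is compiled for.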
  2. Actually even Vision is an option under Linux through OpenCV, although far from the out-of-the-box experience you get with NI IMAQ Vision. I've been using OpenCV on Windows in several proof of concept apps and have been dabbling with the Linux side of that a little. It definitely looks like W10 is not going to be used much as a platform for any real industrial application. Maybe the embedded variant has some more customization options, but that has its own bucket of complications. Microsoft really doesn't seem to know what they want to do. This one is interesting, but being based on a platform that forcefully updates itself, it seems more useful to go directly to Linux for industrial applications.
  3. Well yes, and he is still right! Nobody forced you to write a V4L interface library for LabVIEW! And besides, think about how much C code you would have had to write if you also had to do the UI part for all this! But in your tracing of the C macros I actually fail to see how the bitness would come into play.
  4. My first advice, besides debugging as pointed out by Yair, would be to try to communicate between the same LabVIEW version first. While the LabVIEW flattened format is designed to stay compatible across versions, variants are a very special beast with a much more complicated data structure than most other LabVIEW datatypes. There is a serious chance that flattened variants are not always binary compatible between LabVIEW versions.
  5. I would agree with ensegre. Macros with parameters may look like functions, but they are not! That is also hinted at by the uppercase name and the prepended underscore. It's certainly not a guarantee, but it is common practice to name constants and constant macros with uppercase letters, and function names with all lowercase (or, on Windows, with CamelCase names). While it may seem a bad idea to make the constant different depending on the bitness, this is probably made necessary by the fact that the structures can contain pointers, which changes the size of the structure. The _IOWR() macro actually produces a device driver ioctl() request value, and device drivers always have the bitness of the kernel, even if the application which calls them has a different bitness (and the driver is required to recognize that and translate any pointers accordingly into its 64-bit flat memory model).
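The bitness dependency is easy to see when you print such a request value for a structure that contains a pointer, since the structure size is encoded in the value itself. A minimal sketch; MY_IOCTL and struct my_req are made-up placeholders, not definitions from the real V4L2 headers:

    #include <stdio.h>
    #include <sys/ioctl.h>   /* _IOWR() */

    /* The embedded pointer makes sizeof() differ between 32-bit (4-byte
       pointers) and 64-bit (8-byte pointers) builds. */
    struct my_req {
        unsigned int length;
        void        *data;
    };

    /* The structure size is encoded into the request number, so this
       constant has a different numeric value per bitness. */
    #define MY_IOCTL  _IOWR('V', 99, struct my_req)

    int main(void)
    {
        printf("sizeof(struct my_req) = %zu\n", sizeof(struct my_req));
        printf("MY_IOCTL              = 0x%08lX\n", (unsigned long)MY_IOCTL);
        return 0;
    }

Compiled with -m32 and -m64 this prints two different MY_IOCTL values, which is exactly why a 32-bit and a 64-bit caller end up with different constants for the same driver request.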
  6. https://sourceforge.net/projects/labpython/files/labpython/1.2/ should be compatible back to LabVIEW 6.1 or so. But it will not work with the newer Python releases at all; Python 2.6 32-bit is probably the highest version that has any chance of working.
  7. Well, with the DSC functions you can create the OPC UA items. Reading an ASCII formatted spreadsheet file is trivial and really should not pose any problems after reading the resources that JKSH has posted. Implementing the logic for grouping items according to their folder is not really a LabVIEW problem but simply a classical programming problem about sorting/grouping items in a list. As such I'm not inclined to do the programming work for you. It's not fundamentally difficult, but it is a lot of nitty-gritty work, for me just as well as for you, and it's your problem to solve.
  8. I can't really give you more help here than what I added in my last post. LabVIEW out of the box only supports querying and updating OPC items; configuration is entirely manual. If you want some programmatic configuration capabilities, you need to either get the LabVIEW DSC toolkit or another 3rd party OPC UA interface library.
  9. Crossposted at http://www.labviewforum.de/Thread-OPC-UA-Labview-Reading-items-and-properties-from-excel-sheet-or-text-file and http://forums.ni.com/t5/LabVIEW/OPC-UA-Labview-Reading-items-and-properties-from-excel-sheet-or/td-p/3295449
As to your specific question: standard LabVIEW only supports querying and updating OPC UA items programmatically; configuring them is a manual task. There is the LabVIEW DSC Toolkit, which also supports some limited programmatic configuration of OPC UA items, but not as extensively as it supports the old OPC items.
  10. Hmm, using an asynchronous protocol like OPC UA for things like phase shift calculation is already your first problem. OPC UA, at least in single update mode (it may also have streaming modes, but I don't think they are exposed in LabVIEW at all, if they even exist), has absolutely no synchronisation between the sender and receiver, nor between the two channels. It's a publish-subscribe system where the sender publishes the data to be read by the receiver whenever it suits it. So the actual relation between the two signals, and between sender and receiver in terms of timing, is at best uncertain. That makes phase shift analysis between two signals about as reliable as measuring voltage with your fingertips! You should really look at a different protocol where the two channels are transmitted synchronously (which over a network almost always means together in the same data stream, and in streaming mode, so that the time information is inherent in the data stream). Otherwise you just see some random phase shifts that have more to do with the reliability of your network and the OPC UA stack than with anything in your actual signals, unless your signal has a frequency of no more than a few mHz; then the accuracy of the network and protocol is probably enough to allow you at least some estimation of the phase shift between the signals.
  11. This function was most likely added at some point for TestStand to call LabVIEW test adapters. But I'm pretty sure they use different functionality in newer versions. And using this has a high chance of breaking at some point. NI, controlling both the caller and the callee and not having documented that function in any way, can easily decide to change something about this interface. They won't do it just to pester you, as the work to make such changes is significant, but if there is a technical or business reason to do it, they will, and they will not even acknowledge your complaints about breaking this functionality. I've been investigating the possibilities of calling VIs directly from a DLL myself in the past and came across this and some other functionality, but simply didn't feel it was worth spending too much time on it if it was not somehow publicly acknowledged by NI. It would have been a very handy thing to use in LuaVIEW, as it would make callbacks into LabVIEW code so much easier for the user than what it does now. And it would most likely allow multiple turnarounds between Lua and LabVIEW code, something LuaVIEW currently doesn't even attempt to allow, because the stack frame handling would get completely out of control if we tried.
  12. I really don't know the details. A VI reference is simply a somewhat complex index into a list of VI data structures. This VI data structure is a huge C struct that contains all kinds of data elements, including pointers to pointers to pointers to pointers of various data. Some of this is the diagram heap, the front panel heap, the data space heap, and the compiled code heap. And this structure changes significantly with every LabVIEW version, since there is no external code ever directly accessing it. Interfacing anything in this structure directly is a complete no-go, as it will be totally incompatible with every new LabVIEW version, even service releases and potentially bug fix releases. And the actual machine code is not just a function pointer segment that you can jump to with the parameters on the stack; instead the code directly refers to its parameters through VI internal structures, from the connector pane and all. Setting that up correctly on your own is definitely a way into total craziness.
  13. Can you be more clear? Which function? RTSetCleanupProc() only indirectly has a reference to the current top level VI; this function doesn't take a VI reference in any way, it determines that internally.
  14. But the three callbacks all have one argument: the InstanceDataPtr. This is a pointer to a pointer-sized variable that is stored by LabVIEW for each CLN callsite. LabVIEW provides you that storage; what you store in there is up to you. It could be an integer that allows you to identify the resource associated with this callsite, or a pointer to a structure as simple or complicated as you wish. The reserve function is called when the diagram is initialized, the unreserve function before the diagram is unloaded, and the abort function when the hierarchy that this CLN callsite is located in is aborted. And yes, you can configure the CLN to pass exactly this InstanceDataPtr to the actual function too. It won't show up as a parameter on the CLN icon, but LabVIEW will pass exactly this pointer to the function. So your reserve function allocates some data pointer to identify the callsite, the actual function stores whatever it wants into that pointer, and the abort function checks if there is anything in there that needs to be aborted, canceled or whatever. The unreserve function needs to deallocate any resources that were allocated in either the reserve or run function and not yet cleared by the abort function. Sounds pretty much like what the undocumented RTSetCleanupProc() function does:

    enum
    {   /* cleanup modes (when to call cleanup proc) */
        kCleanRemove,
        kCleanExit,        /* only when LabVIEW exits */
        kCleanOnIdle,      /* whenever active vi hierarchy goes idle */
        kCleanAfterReset,  /* whenever active vi goes idle after a reset */
    };

    typedef int32 (_FUNCC *CleanupProcPtr)(uintptr_t resource);

    TH_REENTRANT int32 _FUNCC RTSetCleanupProc(CleanupProcPtr proc, uintptr_t resource, int32 mode);

Basically this function can be used to register a pointer-sized integer together with a cleanup function pointer that will be called with that integer parameter whenever the event selected as mode happens. This is supposedly the functionality that is used by most LabVIEW refnums; they all use the kCleanOnIdle mode. Don't forget to call this function again with the same function pointer, the same pointer-sized integer and the kCleanRemove mode whenever you are finished with the resource, so that the cleanup entry gets removed from the internal list. And yes, these entries seem to be stored in a linear list, so it might be a good idea not to flood that list with unnecessary resource entries in order to keep your LabVIEW system responsive. If you find this fact disturbing or worse, please forget that you heard about this whole functionality.
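To make the CLN callback mechanism a bit more concrete, here is a minimal sketch of the three callbacks plus an actual function that receives the InstanceDataPtr. The MgErr and InstanceDataPtr typedefs normally come from extcode.h in LabVIEW's cintools directory and are only repeated here to keep the sketch self-contained; the CallsiteData structure and all the function names are made up for illustration:

    #include <stdint.h>
    #include <stdlib.h>

    typedef int32_t MgErr;            /* normally from extcode.h */
    typedef void   *InstanceDataPtr;  /* normally from extcode.h */

    /* Hypothetical per-callsite state kept behind the InstanceDataPtr. */
    typedef struct
    {
        int deviceHandle;     /* whatever resource the actual function uses */
        int abortRequested;
    } CallsiteData;

    /* Reserve: called when the diagram owning the CLN callsite is initialized. */
    MgErr MyReserve(InstanceDataPtr *data)
    {
        *data = calloc(1, sizeof(CallsiteData));
        return *data ? 0 : 2;          /* 2 == mFullErr, out of memory */
    }

    /* Abort: called when the VI hierarchy containing the callsite is aborted. */
    MgErr MyAbort(InstanceDataPtr *data)
    {
        CallsiteData *cd = (CallsiteData *)*data;
        if (cd)
            cd->abortRequested = 1;    /* the running function polls this flag */
        return 0;
    }

    /* Unreserve: called before the diagram is unloaded; release everything. */
    MgErr MyUnreserve(InstanceDataPtr *data)
    {
        free(*data);
        *data = NULL;
        return 0;
    }

    /* The actual library function, with the CLN configured to also pass the
       InstanceDataPtr as an extra, invisible parameter. */
    int32_t MyFunction(double timeout, InstanceDataPtr *data)
    {
        CallsiteData *cd = (CallsiteData *)*data;
        /* ... do the work, checking cd->abortRequested periodically ... */
        (void)timeout;
        return cd->abortRequested ? -1 : 0;
    }

The three callback names are what you would enter on the Callbacks tab of the Call Library Node configuration dialog; LabVIEW then takes care of calling them at the right moments.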
  15. There is one publicly available interface that could allow something like this, although it is not very well documented. It is the callback configuration in the Call Library Interface Node. Basically every Call Library Node can register "callbacks" that are called when the diagram on which the Call Library Node is placed is initialized and deinitialized. And all these callbacks have a pointer parameter that can also be configured to be passed to the actual Call Library function too. It's not straightforward, but a lot easier than trying to figure out code signatures to call into internal functions that might vary from LabVIEW version to version.
  16. From your post for the DCG in the code repository I take it that you are talking here about the THThreadCreate() and friends C API. I'm afraid that there is not really a way to make the IDE aware of such a thread in any way. These functions are only thin wrappers around the platform thread management functions (CreateThread on Windows, pthread_create on Linux, etc.). As such they are used by the LabVIEW kernel to manage threads in a way that makes the actual kernel almost completely independent of the underlying platform API, but they are on such a low level that the IDE is not aware of them unless it created them itself. Basically, calling any of these LabVIEW manager functions (memory, file, thread, etc.) is more or less equal to calling the underlying system API directly, but with the advantage that your C code doesn't have to worry about different APIs when you try to compile it for another LabVIEW target like Linux or Mac OSX. If you only want to work on Windows, calling CreateThread() directly is actually the more direct and simpler way of doing this. What is your actual issue with wanting the IDE to be aware of your created threads?
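For reference, here is a minimal sketch of spawning a detached worker thread directly with the platform APIs, which is roughly the level of abstraction the LabVIEW thread manager functions wrap; SpawnWorker and Worker are made-up names, not part of any LabVIEW API:

    #ifdef _WIN32
    #include <windows.h>

    static DWORD WINAPI Worker(LPVOID arg)
    {
        (void)arg;        /* ... background work ... */
        return 0;
    }

    int SpawnWorker(void *context)
    {
        HANDLE h = CreateThread(NULL, 0, Worker, context, 0, NULL);
        if (h == NULL)
            return -1;
        CloseHandle(h);   /* let the thread run detached */
        return 0;
    }
    #else
    #include <pthread.h>

    static void *Worker(void *arg)
    {
        (void)arg;        /* ... background work ... */
        return NULL;
    }

    int SpawnWorker(void *context)
    {
        pthread_t tid;
        if (pthread_create(&tid, NULL, Worker, context) != 0)
            return -1;
        pthread_detach(tid);
        return 0;
    }
    #endif

Such a thread is completely invisible to the LabVIEW execution system either way; if it needs to interact with a VI, it has to do so through a mechanism like PostLVUserEvent() or an occurrence rather than expecting the IDE to know about it.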
  17. Wow, that graph certainly doesn't look very logical. There is absolutely nothing in the LabVIEW world which would explain the huge activity increase in 2011 and 2013. And your suspicion of spam activity is probably well founded. It seems like somewhere in 2012 someone started to do some tests to launch a huge spam attack (most likely on many other forums too) in 2013, and then towards the end of the year ramped up in one last huge effort to try to make it a success, killing the "business model" for good with that. The interesting data is likely more in the baseline, where you can see that a somewhat steady number dropped to virtually 0 after the first spam attack and is nowadays just barely above 0, which would indeed be significantly lower than at the start of the graph in 2010.
  18. It's a general tendency I have noticed on LabVIEW forums, although maybe less so on the NI forums. I also signed up at the German LabVIEW forum, and while 5 or 6 years ago you had several new topics every day, varying from the newbie question of how to do a simple file IO operation to advanced topics about system architecture and interfacing to external code, nowadays it is maybe a tenth of that, with most topics falling in the more trivial category. The German forum had and has a somewhat broader target range, since it was equally meant for beginners and advanced users, while LAVA started, by the nature of its name, as a forum for somewhat more advanced topics, although I would like to think that we never really gave a newbie a bad feeling when posting here, as long as that person didn't violate just about every possible netiquette there is. The German forum may have one additional effect that contributes to it getting less traffic, and that is that English has also become the de facto standard in Germany, at least in engineering. But without having any numbers to really compare, I would say that the German forum and LAVA have seen a similar decay in the number of new posts and answers in general.
And yes, I have been wondering about that. Where did those people go? I feel that some went to the NI forums as they got more accessible over time, but I do think that a more important aspect is that LabVIEW has become, in many places, just one of many tools to tackle software problems, whereas in the past it was more often THE tool in the toolbox of many developers. That is probably a somewhat jaded view from personal experience, but I certainly see it in other places too when I get there during my job. And Shaun definitely addresses another point when he mentions that LabVIEW innovation has slowly been getting to the point of stagnation in the last 5 to 10 years. That would hurt specialized forums like LAVA, or a local forum like the German forum, most likely a lot more than the NI forum, which catches most of the more trivial user questions about how to get something done or about perceived or real bugs. I'm not sure to what extent the NI forum has been seeing a similar slowdown. Personally I feel it might have been getting a little less active overall in comparison, but what is more apparent is that there, too, the general level of advanced topics has been declining, which would be in accordance with the little to no innovation in LabVIEW. The interesting things have been discussed and brainstormed about, and very few new things got added that would spur new discussions. What is posted nowadays is mostly about problems when using LabVIEW or some hardware, or how to get a simple job done quickly!
Is LabVIEW dead? I wouldn't say so, as I still see it used in many places, but the excitement about using LabVIEW has been somewhat lost by many. It's a specialized tool to do certain things and in a way the only one in town doing it in this way, but by far not the only one you can use to do these things. In fact there have been many new and not so new possibilities for doing it (I regularly see, for instance, situations where people have decided to use Python for everything, even the UI, which is not something I would easily consider), and the general target has been shifting too. If you want to do internet related stuff, then LabVIEW is definitely not the easiest solution and also not the most powerful one.
Engineering has simply become a lot broader in the last 10 years, and while measurement and testing, where you directly tinker with hardware and sensors, is still quite important, another big market has developed that has very little to do with this and where the qualities of LabVIEW don't shine as brightly or even show nasty oxidation spots. Maybe the fact that LabVIEW was always designed as a closed environment, with very limited possibilities to be extended directly by 3rd parties, has hurt it to the point of being forced into the niche market it always tried to protect for itself. It will be interesting to see how NI is going to attempt to fix that. The stagnation in LabVIEW development is something which supposedly happened because they focused their energy on a fundamental redesign of the LabVIEW platform, which has dragged on longer than hoped for and claimed more resources than was good for the existing LabVIEW platform.
  19. I got that too in Chrome. I thought I'd wait and look again later; it was still an issue. So I decided to clear my cache, and voilà!
  20. Even as a Microsoft Certified Partner you do (did?) not have an unlimited license allowance. After new versions of software are out, you are supposed to upgrade to the newest version within one year. The licenses for older versions then become invalid, and with that any VM image using them, even if it is just a backup.
  21. Hmmm, an interesting change. Looks good and pleasant to the eyes. Congrats!
  22. Yes, I'm using a somewhat enhanced OpenG Packager version for my own work. It doesn't support many things I would like, but I haven't really needed them so far, and it allows simple support for selective installs. It doesn't support relocating, relinking and password protecting VIs, as that was really the domain of the OpenG Application Builder, and that part is very much out of date since it doesn't support the 8.0 and newer file types.
  23. Having been one of the initial co-developers of the ogp file format and spec file, I wasn't really looking at RPM closely enough to say that it was based on it, but we did take inspiration from the general RPM idea and tried to come up with something that worked for LabVIEW. The format itself was in some ways more based on the old application builder configuration file than on any specific package manager format.
As to creating an alternative to VIPM, I would think this to be almost an effort in vain. The OpenG Commander was a good thing back in the pre LabVIEW 8 days and worked fairly well for that situation, but the new project format, LabVIEW libraries and classes, and the various other file formats introduced with LabVIEW 8.0 and later really are a very different breed in many ways. Also, VIPM really is a combination of the OpenG Commander, OpenG Package Builder and OpenG Application Builder, but then severely enhanced to handle the new file types too, which is a very complicated process and requires quite a few undocumented VI server methods, some of which changed between versions, so it's hard to support more than two or three LabVIEW versions at all.
As to saying that NI doesn't provide a good source code control solution, that is kind of going in the wrong direction. I haven't seen many software developer tools coming with source code control from the same manufacturer that actually works well in any way. Microsoft couldn't pull off that trick, and there is nothing that makes me believe that NI has even nearly as many resources available as MS. There are ways to use source code control with LabVIEW. Not ideal ones, but there aren't any ideal solutions as far as I'm concerned, even outside of LabVIEW. LabVIEW has a few extra pain points, as some of its files are big nasty binary blobs that none of the source code control tools can handle efficiently, although all of the modern ones can at least handle them. The more modern XML based files, while being text based, have the problem that just about every source code control system will be unable to handle them consistently, as simple text based merging is not enough, and context based merging is still in its infancy and doesn't even work well for other XML based files with a fully known schema. But to turn around and require the LabVIEW files to be in a format that can be easily merged automatically is also not really realistic.
  24. I'm sorry, but I'm not likely to work on this anytime soon. We use MS Office in the office, so OpenOffice is not a logical choice for me to work on, and our projects would usually require working with MS Office too, if they require any office automation integration.
  25. Well, then your NetBEUI to IP address resolution is not working properly. As to your previous comment about going directly to the network path: I was actually referring more to entirely alternative protocols such as WebDAV or (S)FTP.