Everything posted by Rolf Kalbermatter

  1. The address is the IP address of the computer on which the IDE runs, or localhost if it is the same computer. The port number is configurable in the LabVIEW Options, and you need to enable the TCP/IP interface to VI Server there too in order for it to work (but if you have ever used VIPM to install a software package into LabVIEW, that is probably already enabled).
  2. Have you checked out the BLT Toolkit? It's already done and readily available on the LabVIEW Tools Network.
  3. You need to provide the Application Open function with an address and port number to connect to; otherwise it will simply connect to its own instance, which is why you only see the VIs in your executable.
  4. Well, I misunderstood you there then! But on Linux you have inotify() or the older dnotify() to do something similar to what FindFirstChangeNotification() does under Windows. inotify() has been present since around kernel 2.6.13 and glibc 2.4, and working from glibc 2.5, so nowadays there should be no reason to have to use the inferior dnotify() functionality.
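A minimal sketch of the inotify() usage, just for illustration (the watched path and event mask here are arbitrary placeholders, not from the original discussion):

    /* watch a directory for created/modified/deleted files with inotify */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/inotify.h>

    int main(void)
    {
        char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
        int fd = inotify_init();
        if (fd < 0) { perror("inotify_init"); return 1; }

        /* placeholder path; pick the directory you actually care about */
        int wd = inotify_add_watch(fd, "/tmp/watched",
                                   IN_CREATE | IN_MODIFY | IN_DELETE);
        if (wd < 0) { perror("inotify_add_watch"); return 1; }

        ssize_t len = read(fd, buf, sizeof buf);   /* blocks until something changes */
        if (len <= 0) { close(fd); return 1; }

        for (char *p = buf; p < buf + len; )
        {
            struct inotify_event *ev = (struct inotify_event *)p;
            printf("event 0x%x on %s\n", ev->mask, ev->len ? ev->name : "");
            p += sizeof(struct inotify_event) + ev->len;
        }
        close(fd);
        return 0;
    }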
  5. See here: http://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg53058.html It comes from the fact that SQLite is a file based database, not a server based one. The SQLite shared library is the entire SQLite database engine, but as a DLL it gets loaded into each process separately and does not share any state with another SQLite engine loaded in another process. It does use file range locking (when enabled in the build) in order to maintain some kind of consistency even if two processes happen to modify the data concurrently. And apparently, at least under Windows (when enabled in the build), it does use file notifications to get informed about changes from other processes, but in order to detect what has changed it then has to read in all the data again and update its internal management information, so I'm not sure why it would work under Windows as Shaun claims. Basically the fact that SQLite is a file based database more or less makes what you want pretty impossible. The solution most people have come up with is to add an extra table which stores records about changes to tables and then query that regularly, possibly with a file change notification callback mechanism to avoid too much polling.
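As a rough illustration of that extra-table approach (the table, trigger and column names here are made up, and it assumes a hypothetical 'measurements' rowid table already exists), the setup could look something like this:

    /* create a change_log table plus triggers that record every insert/update,
       so another process can poll "SELECT * FROM change_log WHERE id > ?last_seen" */
    #include <stdio.h>
    #include <sqlite3.h>

    static const char *setupSql =
        "CREATE TABLE IF NOT EXISTS change_log ("
        "  id INTEGER PRIMARY KEY,"
        "  tbl TEXT, row_id INTEGER, op TEXT,"
        "  ts DATETIME DEFAULT CURRENT_TIMESTAMP);"
        "CREATE TRIGGER IF NOT EXISTS measurements_ai AFTER INSERT ON measurements "
        "BEGIN INSERT INTO change_log(tbl, row_id, op) VALUES('measurements', NEW.rowid, 'I'); END;"
        "CREATE TRIGGER IF NOT EXISTS measurements_au AFTER UPDATE ON measurements "
        "BEGIN INSERT INTO change_log(tbl, row_id, op) VALUES('measurements', NEW.rowid, 'U'); END;";

    int main(void)
    {
        sqlite3 *db;
        char *err = NULL;
        if (sqlite3_open("shared.db", &db) != SQLITE_OK)
            return 1;
        if (sqlite3_exec(db, setupSql, NULL, NULL, &err) != SQLITE_OK)
            fprintf(stderr, "%s\n", err);
        sqlite3_close(db);
        return 0;
    }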
  6. Either way, to make this useful you would want to have this translated into a user event, and that will require the creation of an external shared library which can install that callback, which is then translated into a user event through the LabVIEW PostLVUserEvent() C manager API function. As the current interface goes to great lengths to avoid having to create an intermediate shared library, this is not a trivial addition to the library but a very significant change, especially since every supported platform will require the creation of its own shared library (Windows 32 and 64 bit, Linux 32 and 64 bit, and Mac OS X 32 and 64 bit already makes 6 different shared libraries, not to mention at least 4 extra cRIO flavours). And I might be misreading the documentation for that function, but it does not call the callback for a specific table but for any row update, insert or delete in any rowid table for the current database connection. But not for tables without rowid.
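To give an idea of what such an intermediate shared library might contain, here is a minimal sketch, assuming the callback in question is sqlite3_update_hook() and that the user event is defined in LabVIEW as a simple I64 carrying the affected rowid; the exported function name and the data posted are assumptions for illustration only:

    #include "extcode.h"    /* LabVIEW external code interface (cintools) */
    #include "sqlite3.h"

    static LVUserEventRef gUserEvent;   /* refnum registered from the diagram */

    /* sqlite3_update_hook callback: forward the affected rowid to LabVIEW */
    static void UpdateHook(void *ctx, int op, const char *dbName,
                           const char *tblName, sqlite3_int64 rowid)
    {
        sqlite3_int64 value = rowid;
        if (gUserEvent)
            PostLVUserEvent(gUserEvent, &value);
    }

    /* called from a Call Library Node; the connection pointer is assumed to be
       passed as a pointer-sized integer, the user event refnum by pointer */
    MgErr LVSQLiteRegisterUpdateEvent(sqlite3 *db, LVUserEventRef *ev)
    {
        gUserEvent = *ev;
        sqlite3_update_hook(db, UpdateHook, NULL);
        return mgNoErr;
    }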
  7. Not simply like that. It was part of project work, so copyright is an issue. And it is definitely too big a project to tackle as a one man show. One complication with OpenCV is that the newer API is C++ based, so you will not get around creating an intermediate shared library either. And unfortunately the IMAQ control does not seem to work on non Windows platforms, even though it is mostly a built in control in LabVIEW itself and not provided by the IMAQ Vision software, so there is the need to add an additional element to handle efficient display of images in some form of external window, and that is, thanks to different window managers, not always straightforward.
  8. Actually even Vision is an option under Linux through OpenCV, although far from the out of the box experience you get with NI IMAQ Vision. I've been using OpenCV on Windows in several proof of concept apps and have been dabbling with the Linux side of that a little. It definitely looks like W10 is not going to be used much as a platform for any real industrial application. Maybe the embedded variant has some more customization options, but that has its own bucket of complications. Microsoft really doesn't know what they want to do. This one is interesting, but based on a platform that forcefully auto-updates, it seems more useful to go directly to Linux for industrial applications.
  9. Well yes, and he is still right! Nobody forced you to write a V4L interface library for LabVIEW! And besides, think about how much C code you would have had to write if you had to write the UI part for all this too! But in your tracing of the C macros I actually fail to see how the bitness comes into play.
  10. My first advice, besides debugging as pointed out by Yair, would be to try to communicate between the same LabVIEW version on both ends first. While the LabVIEW flattened format is designed to stay compatible across versions, variants are a very special beast with a much more complicated data structure than most other LabVIEW datatypes. There is a serious chance that flattened variants are not always binary compatible between LabVIEW versions.
  11. I would agree with ensegre. Macros with parameters may look like functions but they are not! This is further suggested by the uppercase name and the prepended underscore. It's certainly not a guarantee, but it is common practice to name constants and constant macros with uppercase letters, and function names with all lowercase (or on Windows with CamelCase names). While it may seem a bad idea to make the constant different depending on the bitness, this is probably made necessary by the fact that the structures can contain pointers, which changes the size of the structure. The _IOWR() macro actually constructs a device driver ioctl() value, and device drivers always have the bitness of the kernel, even if the application which calls them has a different bitness (and the driver is required to recognize that and translate any pointers accordingly to its 64 bit flat memory model).
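A small illustration of why such an ioctl request value can depend on bitness (the driver magic, request number and struct are invented for the example): _IOWR() encodes sizeof() of the argument struct into the request number, and a struct containing a pointer has a different size on 32 and 64 bit.

    #include <stdio.h>
    #include <sys/ioctl.h>

    struct my_ioctl_arg {
        void *buffer;          /* 4 bytes on 32 bit, 8 bytes on 64 bit */
        unsigned int length;
    };

    #define MYDRV_MAGIC   'V'
    #define MYDRV_GETDATA _IOWR(MYDRV_MAGIC, 1, struct my_ioctl_arg)

    int main(void)
    {
        /* the encoded request value changes with the bitness of the code
           that evaluates the macro, because sizeof(struct my_ioctl_arg) changes */
        printf("sizeof(arg) = %zu, request = 0x%lx\n",
               sizeof(struct my_ioctl_arg), (unsigned long)MYDRV_GETDATA);
        return 0;
    }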
  12. https://sourceforge.net/projects/labpython/files/labpython/1.2/ should be compatible back to LabVIEW 6.1 or so. But it will not work with the newer Python releases at all; Python 2.6 32 bit is probably the highest version that has any chance of working.
  13. Well, with the DSC functions you can create the OPC UA items. Reading an ASCII formatted spreadsheet file is trivial and really should not pose any problems after reading the resources that JKSH has posted. Implementing the logic for grouping items according to their folder is not really a LabVIEW problem but simply a classical programming problem of sorting/grouping items in a list. As such I'm not inclined to do the programming work for you. It's not fundamentally difficult, but it is a lot of nitty gritty work, for me just as well as for you, and it's your problem to solve.
  14. I can't really give you more help here than what I added in my last post. LabVIEW out of the box only supports querying and updating OPC items; configuration is entirely manual. If you want some programmatic configuration capabilities, you need to either get the LabVIEW DSC toolkit or another 3rd party OPC UA interface library.
  15. Crosspost: http://www.labviewforum.de/Thread-OPC-UA-Labview-Reading-items-and-properties-from-excel-sheet-or-text-file and http://forums.ni.com/t5/LabVIEW/OPC-UA-Labview-Reading-items-and-properties-from-excel-sheet-or/td-p/3295449 As to your specific question, standard LabVIEW only supports querying and updating OPC UA items programmatically. Configuration of them is manual. There is the LabVIEW DSC Toolkit, which also supports some limited programmatic configuration of OPC UA items, but not as extensively as for the old OPC items.
  16. Hmm, using an asynchronous protocol like OPC UA for things like phase shift calculation is already your first problem. OPC UA, at least in single update mode (it may also have streaming modes, but I don't think they are exposed in LabVIEW at all, if they even exist), has absolutely no synchronisation between the sender and receiver, nor between the two channels. It's a publish-subscribe system where the sender publishes the data to be read by the receiver whenever it suits it. So the actual relation between the two signals, and between sender and receiver in terms of timing, is at best uncertain. That makes phase shift analysis between two signals about as reliable as measuring voltage with your finger tips! You should really look at a different protocol where the two channels are transmitted synchronously (which over a network almost always means together in the same data stream, and in streaming mode, so that the time information is inherent in the data stream). Otherwise you just see some random phase shifts that have more to do with the reliability of your network and the OPC UA stack than with anything in your actual signals, unless your signal has a frequency of not more than a few mHz; then the accuracy of the network and protocol are probably enough to allow you at least some estimation of the phase shift between the signals.
  17. This function most likely was added at some point for TestStand to call LabVIEW test adapters. But I'm pretty sure they use different functionality in newer versions. And using this has a high chance of breaking at some point. NI, controlling both the caller as well as the callee and not having documented that function in any way, can easily decide to change something about this interface. They won't do it just to pester you, as the work to make these changes is significant, but if there is a technical or business reason to do it, they will, and will not even acknowledge your complaints about breaking this functionality. I've been investigating the possibilities of calling VIs directly from a DLL myself in the past and came across this and some other functionality, but simply didn't feel it was worth spending too much time on it if it was not somehow publicly acknowledged by NI. It would have been a very handy thing to use in LuaVIEW, as it would make callbacks into LabVIEW code so much easier for the user than what it does now. And it would most likely allow multiple turnarounds between Lua and LabVIEW code, which LuaVIEW currently doesn't even attempt to allow because of the stack frame handling, which would get completely out of control if we tried to allow that.
  18. I really don't know the details. A VI reference is simply a somewhat complex index into a list of VI data structures. This VI data structure is a huge C struct that contains all kinds of data elements, including pointers to pointers to pointers to pointers of various data. Some of this is the diagram heap, the front panel heap, the data space heap, and the compiled code heap. And this structure changes significantly with every LabVIEW version, since there is no external code ever directly accessing it. Interfacing anything in this structure directly is a complete no-go, as it will be totally incompatible with every new LabVIEW version, even service releases and potentially bug fix releases. And the actual machine code is not just a function pointer segment that you can jump to with the parameters on the stack; instead the code directly refers to its parameters through VI internal structures, connector pane and all. Setting that up correctly on your own is definitely a way into total craziness.
  19. Can you be more clear? Which function? RTSetCleanupProc() only indirectly has a reference to the current top level VI; this function doesn't take a VI reference in any way, it determines that internally.
  20. But the three callbacks all have one argument: the InstanceDataPtr. This is a pointer to a pointer sized variable that is stored by LabVIEW for each CLN callsite. LabVIEW provides you that storage; what you store in there is up to you. It could be an integer that allows you to identify the resource associated with this callsite, or a pointer to a structure as simple or complicated as you wish. The reserve function is called when the diagram is initialized, the unreserve function before the diagram is unloaded, and the abort function when the hierarchy that this CLN callsite is located in is aborted. And yes, you can configure the CLN to pass exactly this InstanceDataPtr also to the actual function. It won't show up as a parameter on the CLN icon, but LabVIEW will pass exactly this pointer to the function. So your reserve function allocates some data pointer to identify the callsite, the actual function stores whatever it wants into that pointer, and the abort function checks if there is anything in there that needs to be aborted, canceled or whatever. The unreserve function needs to deallocate any resources that were allocated in either the reserve or run function and not yet cleared by the abort function. Sounds pretty much like what the undocumented RTSetCleanupProc() function does:

    enum {    /* cleanup modes (when to call cleanup proc) */
        kCleanRemove,
        kCleanExit,        /* only when LabVIEW exits */
        kCleanOnIdle,      /* whenever active vi hierarchy goes idle */
        kCleanAfterReset,  /* whenever active vi goes idle after a reset */
    };

    typedef int32 (_FUNCC *CleanupProcPtr)(uintptr_t resource);

    TH_REENTRANT int32 _FUNCC RTSetCleanupProc(CleanupProcPtr proc, uintptr_t resource, int32 mode);

Basically this function can be used to register a pointer sized integer with an according cleanup function pointer, which will be called with the integer parameter whenever the event that is used as mode happens. This is supposedly the functionality that is used by most LabVIEW refnums. They all use the kCleanOnIdle mode. Don't forget to call this function again with the same function pointer and pointer sized integer and the kCleanRemove mode whenever you are finished with this resource, so that the cleanup entry gets removed from the internal list. And yes, these entries seem to be stored in a linear list, so it might be a good idea not to flood that list with unnecessary resource entries in order to keep your LabVIEW system responsive. If you find this fact disturbing or worse, please forget that you heard about this whole functionality.
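For what it's worth, a minimal sketch of such a callback set in C, assuming the callback prototypes as documented for the Call Library Node and the InstanceDataPtr typedef from extcode.h; the MyInstanceData struct, the abort flag and all function names are made up for illustration:

    #include <stdlib.h>
    #include "extcode.h"    /* MgErr, int32, InstanceDataPtr */

    typedef struct
    {
        volatile int abort;   /* set by the abort callback, polled by the function */
    } MyInstanceData;

    /* reserve callback: called when the diagram containing the call site is initialized */
    MgErr MyReserve(InstanceDataPtr *data)
    {
        *data = calloc(1, sizeof(MyInstanceData));
        return *data ? mgNoErr : mFullErr;
    }

    /* abort callback: called when the VI hierarchy containing the call site is aborted */
    MgErr MyAbort(InstanceDataPtr *data)
    {
        MyInstanceData *inst = (MyInstanceData *)*data;
        if (inst)
            inst->abort = 1;
        return mgNoErr;
    }

    /* unreserve callback: called before the diagram is unloaded */
    MgErr MyUnreserve(InstanceDataPtr *data)
    {
        free(*data);
        *data = NULL;
        return mgNoErr;
    }

    /* the actual function; the CLN is configured to also pass the instance data
       pointer here (assumed to arrive as InstanceDataPtr*, like in the callbacks) */
    int32 MyLengthyOperation(InstanceDataPtr *data)
    {
        MyInstanceData *inst = (MyInstanceData *)*data;
        while (!inst->abort)
        {
            /* do a chunk of work, check the abort flag regularly,
               and break out when the work is finished */
            break;
        }
        return 0;
    }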
  21. There is one publicly available interface that could allow something like this, although it is not very well documented: the callback configuration in the Call Library Interface Node. Basically every Call Library Node can register "callbacks" that are called when the diagram on which the Call Library Node is placed is initialized and deinitialized. And all these callbacks have a pointer parameter that can also be configured to be passed to the actual Call Library function too. It's not straightforward, but a lot easier than trying to figure out code signatures to call into internal functions that might vary from LabVIEW version to version.
  22. From your post for the DCG in the code repository, I take it that you are talking here about the THThreadCreate() and friends C API. I'm afraid that there is not really a way to make the IDE aware of this thread in any way. These functions are only thin wrappers around the platform thread management functions (CreateThread on Windows, pthread_create on Linux, etc.). As such they are used by the LabVIEW kernel to manage threads in a way that makes the actual kernel almost completely independent of the underlying platform API, but they are on such a low level that the IDE is not aware of them unless it created them itself. Basically calling any of these LabVIEW manager functions (memory, file, thread, etc.) is more or less equal to calling the underlying system API directly, but with the advantage that your C code doesn't have to worry about different APIs when you try to compile it for another LabVIEW target like Linux or Mac OS X. If you only want to work on Windows, calling CreateThread() directly is actually the more direct and simpler way of doing this. What is your actual issue about wanting to have the IDE be aware of your created threads?
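For illustration, a minimal Windows-only sketch of that direct approach (the worker function and its parameter are placeholders); note that, as said above, the IDE knows nothing about a thread created this way:

    #include <windows.h>
    #include <stdio.h>

    /* worker routine run in the background thread */
    static DWORD WINAPI WorkerProc(LPVOID param)
    {
        printf("worker running with param %p\n", param);
        return 0;
    }

    int main(void)
    {
        HANDLE hThread = CreateThread(NULL, 0, WorkerProc, NULL, 0, NULL);
        if (!hThread)
            return 1;
        WaitForSingleObject(hThread, INFINITE);
        CloseHandle(hThread);
        return 0;
    }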
  23. Wow, that graph certainly doesn't look very logical. There is absolutely nothing in the LabVIEW world which would explain the huge activity increase in 2011 and 2013. And your suspicion of spam activity is probably well founded. Seems like somewhere in 2012 someone started to do some tests to launch a huge spam attack in 2013 (most likely on many other forums too) and then towards the end of the year ramped up in one last huge effort to try to make it a success, definitively killing the "business model" with that. The interesting data is likely more in the baseline, where you can see that a somewhat steady number dropped to virtually 0 after the first spam attack and is nowadays just barely above 0, which would indeed be significantly lower than at the start of the graph in 2010.
  24. It's a general tendency I have noticed on LabVIEW forums, although maybe less so on the NI forums. I also signed up at the German LabVIEW forum, and while 5 or 6 years ago you had several new topics every day, varying from the newbie question of how to do a simple file IO operation to the advanced topic of system architecture and interfacing to external code, nowadays it is maybe a tenth of that, with most topics ranging in the more trivial category. The German forum had and has a somewhat broader target range, since it was equally meant for beginners and advanced users, while LAVA started by the nature of its name as a forum for somewhat more advanced topics, although I would like to think that we never really gave a newbie a bad feeling when posting here, as long as that person didn't violate just about every possible netiquette there is. The German forum may have one additional effect that may contribute to it getting less traffic, and that is that English has also in Germany become the de facto standard in at least engineering. But without having any numbers to really compare, I would say that the German forum and LAVA have seen a similar decay in the number of new posts and answers in general.

And yes, I have been wondering about that. Where did those people go? I feel that some went to the NI forums as they got more accessible over time, but I do think that a more important aspect is that LabVIEW has in many places become just one of many tools to tackle software problems, whereas in the past it was more often THE tool in the toolbox of many developers. That is probably a somewhat jaded view from personal experience, but I certainly see it in other places too when I get there during my job. And Shaun definitely addresses another point when he mentions that LabVIEW innovation has slowly been getting to the point of stagnation in the last 5 to 10 years. That would hurt specialized forums like LAVA, or a local forum like the German forum, most likely a lot more than the NI forum, which catches most of the more trivial user questions of how to get something done or about perceived or real bugs. I'm not sure to what extent the NI forum has been seeing a similar slowdown. Personally I feel it might have been getting a little less active overall in comparison, but what is more apparent is the fact that there too the general level of advanced topics has been declining, which would be in accordance with little to no innovation in LabVIEW. The interesting things have been discussed and brainstormed about, and very little new got added that would spur new discussions. What is posted nowadays is mostly about problems when using LabVIEW or some hardware, or how to get a simple job done quickly!

Is LabVIEW dead? I wouldn't feel so, as I still see it used in many places, but the excitement about using LabVIEW has been somewhat lost by many. It's a specialized tool to do certain things, and in a way the only one in town doing it in this way, but by far not the only one you can use to do these things. In fact there have been many new and not so new possibilities for doing it (I regularly see situations where people have decided to use Python for everything, even the UI, which is not something I would easily consider) and the general target has been shifting too. If you want to do internet related stuff, then LabVIEW is definitely not the easiest solution and also not the most powerful one.
Engineering simply has gotten a lot broader in the last 10 years, and while measurement and testing, where you directly tinker with hardware and sensors, still is quite important, another big market has developed that has very little to do with this and where the qualities of LabVIEW don't shine as brightly or even show nasty oxidation spots. Maybe the fact that LabVIEW always was designed as a closed environment with very limited possibilities to be extended directly by 3rd parties has hurt it to the point of being forced into the niche market it always tried to protect for itself. It will be interesting to see how NI is going to attempt to fix that. The stagnation in LabVIEW development is something which supposedly happened because they focused their energy on a fundamental redesign of the LabVIEW platform, which has dragged on longer than hoped for and claimed more resources than was good for the existing LabVIEW platform.
  25. I got that too in Chrome. Thought I'd wait and see again later. Still an issue, so I decided to clear my cache and voila!