
Rolf Kalbermatter

Members
  • Posts: 3,871
  • Joined
  • Last visited
  • Days Won: 262

Everything posted by Rolf Kalbermatter

  1. I'm not sure I understand the entire problem here. You may have to consider different file types differently. As long as you do not have any VI open, even if your project is loaded, just changing a VI on disk shouldn't cause trouble with LabVIEW remembering any old VI code. This assumes that the VI names and/or paths don't change. You should be able to simply do a revert or branch without any strange side effects, and when you then open the main or any other VI from the project you should be fine. If names and/or paths change, things get more complicated. But assuming that the Git repository state was consistent, as far as LabVIEW is concerned, at the moment the reverted or branched snapshot was taken, this would also mean that the lvproj and lvlib files have changed, because these are where the actual paths are recorded for management reasons. So if your Git operation changes any of these files then yes, you will have to find a way to reload the project and/or libraries that are affected. If you have a way to determine that such files have changed, you could attempt smart reloading. Otherwise the safest option would seem to be to reload the entire project every time you do an action that could potentially change the project and/or library files under the nose of LabVIEW.
  2. It's more complicated than that. When you only have the project open, no VIs are actually loaded in memory. But the project remembers all the linkage, and you get into conflict hell when the location of any of the VIs changes. LabVIEW is in that respect very special, as each VI is in fact its own dynamically loadable library module. You get the same DLL-hell issue when you deal with multiple DLLs, but that is a problem that happens only at deployment of a traditional application, not during development. Basically, in conventional programming environments you do not have dynamically linked modules like in LabVIEW. Each source code file is its own entity, only loosely coupled by the names of its symbols, and only at the linker stage. The compiler does not care whether module x uses function z from module y or whether that function is rather located in module xyz. Only the linker tries to fit everything together, and complains if it can't match a required import to an export symbol in the combined object files. All source code control is built around this typical scenario, not around a strongly linked system like the one LabVIEW uses, even at development time already. Should LabVIEW adopt a loosely linked scheme like C does? Most likely not. It used to have a much less strict system before projects were introduced: LabVIEW simply matched a VI reference to the very first VI it could find on disk with the required name. That was in several ways easier to deal with than the current project approach, which also remembers the exact location of each VI as it was last used and gives you conflicting items if something has changed. Quite a hassle to resolve when you know you moved a VI library on disk for some reason. However, the less strict system before projects existed caused many headaches and undesired cross-linking for not-so-experienced LabVIEW programmers, and even many experienced LabVIEW programmers ran into a mess at some point sooner or later.
So the problem here is that LabVIEW has several reasons to do as it does, but all the source code control systems were never really designed to deal with the specific requirements that LabVIEW poses. You could just as much claim that it is the SCC's fault as LabVIEW's, but in reality it is neither's fault. LabVIEW has certain requirements born out of certain expectations in how it was designed to work, at a time when SCC was itself in its infancy and CVS was about the highest of the feelings you could get in that area. Requiring LabVIEW to change its paradigms to work better with modern SCC systems is not really a realistic expectation. At the same time, NI is not likely going to develop its own SCC system that is specifically adapted to LabVIEW's quite unique requirements. Bigger fish than NI have failed catastrophically in that area, or is anyone on this site still forced to suffer under Visual SourceSafe??
  3. Imaq USB Init.vi sounds a lot like a VI from the unsupported IMAQ for USB Webcam add-on that NI made available many years ago. This functionality has been fully integrated into the IMAQdx Image Acquisition interface in modern LabVIEW versions. Integrated meaning that the DirectX interface that most USB cameras support is now available in the standard IMAQdx Acquisition VIs, not that it provides you with exactly the same VI names. Simply use the IMAQdx Open Camera function to open a connection to the camera. IMAQdx is however licensed software which requires activation. If you want to keep using the older IMAQ interface, you will have to find the IMAQ for USB camera driver on the NI site and install it in your LabVIEW yourself. It's unsupported and there is no guarantee that it will keep working in newer LabVIEW versions.
  4. Well, one big piece of advice: try to avoid merging as much as possible. Unlike with text sources, where automatic merging is often possible and you only have to glance over the result to make sure nothing stupid has been done, graphical merging is still a fully manual job. The merging tool shows you the differences and lets you decide which changes should be copied into the master, but it will not do any automatic merging. That alone is a big incentive to only really merge if there is absolutely no other way around it. We have found that it is easier to ensure that no two people ever work on the same VI, to avoid the merging hassles afterwards. Graphical merging is still in its infancy, and I'm not even sure there is an easy way to reach the same level of automatic merging as with text sources. Text is fairly one-dimensional in structure, while graphics are at least two-dimensional, and in the case of LabVIEW in fact more like 2 1/2-dimensional. Automatic text merging can still suck too, if two developers happen to make changes to the same text lines, but for LabVIEW the smallest unit of measure for automatic merging is still a whole VI.
  5. There is very little you cannot do in LabVIEW. But the architecture you describe is not something that will be trivial to build. Are you planning to write your PhD about distributed cloud computing, or rather about the imaging aspect you described? The system you describe could easily be a whole PhD on its own and then some. As long as you keep everything in LabVIEW, what you want to do is somewhat manageable, but still a big task. Obviously, because of your idea about distributed computing, you will need to keep the communication network based. A fairly quick and maybe a little dirty approach would be to directly use the VI Server interface, with which you can control local VIs as well as VIs on any other computer that has LabVIEW installed and is accessible through a TCP/IP network connection.
  6. Well, I can't make any promises. The function is in the SourceForge repository and as such available for download if you use SVN directly. Building packages is an entirely different beast, which I haven't done so far, and I also don't really know how to put packages up on the VIPM accessible download locations. It's also not very efficient to release a new package with every little change. There are other bug reports for other OpenG packages, and also improvement requests where a broad agreement on how to solve them hasn't always been reached. Those would likely need to be consolidated and cleaned up as well, so there could be one single combined release of new OpenG packages.
  7. crosspost here http://forums.ni.com/t5/LabVIEW/kml-file-on-google-map-on-labview/td-p/2861286
  8. The second is not really a bug. The End of Line constant on Windows is "\r\n", while when entering multiple lines in a LabVIEW string LabVIEW uses a single "\n" only, unless one explicitly enters the "\r\n" sequence. So the "\r\n" sequence does not occur and therefore won't cause the string to be split into multiple elements. Enclosed is a first attempt at fixing the "ignore duplicate delimiters?=TRUE" case. String to 1D Array.vi
  9. One thing that hits LabVIEW saves and loads quite badly is certain virus scanners. They seem to want to intercept every single LabVIEW disk access, and that can add up to really long delays and totally maxed-out cores. Some virus scanners are worse than others, but all of them have some influence. Another reason for long delays when trying to open a file can be network paths installed in the Explorer workspace (either mapped to a disk drive letter or simply as a network path). If some of these paths are not currently available, Windows tends to run into timeout delays in various file access routines, including opening the file dialog box. For some reason Windows seems to enumerate the whole desktop workspace repeatedly on such actions and queries all those paths, and if the network server that these paths connect to is slow to respond or not available, that enumeration can really take several seconds or more.
  10. Not sure what you are trying to say here. TCHAR is a Microsoft definition that resolves to 16-bit UTF-16 characters when UNICODE is defined, or to 8-bit ASCII characters when it is not. The only way to find out is by checking the project file of the test application. Depending on the Visual C version, you can either set a specific property in the compiler settings that says the code should be compiled as Unicode, or you can define UNICODE explicitly yourself, although that is likely to cause problems at some point, as other compiler options might depend on that specific compiler setting rather than on a project-wide UNICODE define. The problem is not about what Visual C uses by default for its main function but how the test application (and ultimately the DLL) was compiled. It is also not entirely clear that _Initialize() uses char* for the parameter. The underscore in front of the function could just as well mean that the definition in the first post is not really taken from the header file but rather deduced from the Call Library Node dialog. However, I did a quick check of the name mangling shown in that picture, and from that it seems to be indeed char*, so I'm not sure what else could be the issue here.
  11. There is no fixed rule. Generally it is better to open it once outside the loop and then release it at the end. However some components are not written in a way that allows that without having the previous execution influence the next one, and then it is sometimes necessary to open and close the object every iteration.
  12. Fake Exec State might not do what the OP needs in this respect; it would anyhow miss the event handling for the popup menu to do anything when the user selects the menu. And Fake Exec State is known to do some very nasty things to a VI that many LabVIEW functions do not expect at all and will simply trip over with a crash or worse.
  13. FlexLM is the engine used for the LabVIEW licensing scheme, both for simple licenses as well as volume licensing. But I think your very frequent hit of the registry for a borrowed license check might be a bit excessive and caused by some kind of misconfiguration in the NI License Manager.
  14. And I have never seen a programming environment that resembles LabVIEW in just about anything, and I don't really consider this a bad thing. I'm usually more concerned about starting up an application that seems to do nothing for a very long time, only to find out that there will eventually be 10 copies of it started up after minutes of inactivity. Not to mention wondering whether it may be installing all kinds of nasty things in the background while seemingly doing nothing. Compared to that, a search dialog is a minor inconvenience and easily worked around with your own splash screen.
  15. There is another spam post in the blogs. I would, but the Report Entry button on blog posts shows a privilege error when I press it. So there seems no way to report a blog post directly.
  16. That window shows automatically after a certain amount of time when the VI hierarchy hasn't been fully loaded yet. It has done so since very early versions of LabVIEW, and there is no INI file setting that I'm aware of to disable it. Most likely something caused the load to take longer when the program was ported to a newer LabVIEW version by Mikael. The way to avoid it is to create your own splash screen with a very small hierarchy that then loads the actual Main VI dynamically using the VI Server Run method. That way there is a quick screen that avoids the LabVIEW search dialog, and then you can take as much time for loading your actual main hierarchy as you want.
  17. That's currently mostly true, because most Win64 machines still don't normally use more than 4GB of memory per process. But relying on this will surely come back to bite you in a few years, when your 100GB machine starts to crunch on 5000 x 2500 pixel images in 32-bit color mode. Microsoft didn't define a handle to be pointer sized for no reason. Internally most handles are pointers.
  18. I thought MBCS encoding has exactly the attribute of not using a fixed code size for the different character elements. So I somehow doubt your claim that an Asian-localized LabVIEW would show double the number of bytes in String Length than it has characters. It would most likely be around double if the text contains the local Asian characters, but likely not exactly, and if it is structured similarly to UTF8 anyway, it might actually show exactly as many bytes as it contains characters if it contains Western English text only. I guess I used somewhat bad words to describe what I wanted to do. You are right that the crossing of memory borders is the place where string encoding needs to be taken care of, and that there are of course always problems with one encoding not translating exactly one to one to another one. As such, the UTF encodings are the most universal ones, as they support basically all the currently known characters, including some that haven't been used for 1000 or more years. While you are right that most NI products currently use ASCII or at most Win1252, as that is the default ACP for most Western Windows installations, there is certainly already a problem with existing LabVIEW applications that run on Windows installations configured for different locales. There, for instance, the string a DLL receives from the LabVIEW diagram can and will contain different extended ASCII characters, and the DLL has to figure out what that encoding locale is before it can do anything sensible with the string when it wants to be locale aware (case in point: the OpenG lvzip library, which needs to translate all strings from CP_ACP to CP_OEM encoding to deal correctly (well, correctly is a bit too much said here, but at least the way other ZIP utilities do it) with the file names and comments when adding them to an archive, and vice versa when reading them from it).
Also, any string written to disk or anywhere else in such a locale will look different than on a Win1252 locale when it contains extended characters. This is what I mean by the task being a nightmare for NI. Any change you guys make, no matter how smart and backwards compatible you attempt to be, has a big potential to break things here. And it must be a huge undertaking, or LabVIEW would have had that support since 7.1 already! And one plea from my side: if and when NI adds Unicode string support, please expose a C manager interface for it too!
  19. I don't have any experience with them, but it doesn't surprise me that there are issues with that. Such managers have to hook deep into the Windows graphics subsystem. Since Microsoft never really planned for such an interface, many issues are to be expected, as such a manager has to intercept various low-level window messages, some of them not documented anywhere, and with differences between the various Windows versions too. So they will all have some issues somewhere, as they will almost never be able to hook fully transparently into the Windows graphics subsystem. X Windows, which actually has a quite clear client-server architecture and therefore a well documented interface between the two, also has such issues depending on X server and window manager version and implementation, since even there the servers and clients don't always implement every function fully correctly. Imagine how much harder it must be to create a properly working window manager on a system where most of this interaction is in fact undocumented and officially unexposed.
  20. The first thing you should do is change the library path in the Call Library Node to explicitly say just USER32.DLL. You currently have a path in there rather than just a name, and in that case the LabVIEW Application Builder believes that the DLL is private to the app and automatically adds it to the data folder of your app. LabVIEW then tries to load that private DLL and call its function, but the handle you pass it comes from USER32.DLL in the system folder and has absolutely no meaning inside the private copy of USER32.DLL, hence the crash. The problem in fact already happens when LabVIEW tries to load its local copy of USER32.DLL. This DLL interacts on many levels with other Windows kernel internals and tries to do all kinds of stuff to initialize the Windows API system. However, that conflicts with the initialization the system DLL performed when it was loaded at computer startup, and therefore Windows simply shuts down the process. After you rebuild your app, make sure you don't end up with a user32.dll file inside your built application directory anymore. This should fix the crash in the built app. Another change you should make is to turn the HWND control into a U64 control, remove the U32 conversion in Get HWnd From VI Ref.vi, and change the corresponding parameter in the Call Library Node to be a pointer-sized integer. Otherwise you will run into nasty stuff again if you ever move your application to LabVIEW for Windows 64-bit.
  21. In other words, to add to what Logman said: if you have a bag that is too small to put in 6 apples in their own little baggies, then even if you try to put in one apple at a time without a baggy, you still won't be able to put in all 6 apples!
  22. Performance is likely not that much of a concern in this application, but I would definitely implement this as a single 64-bit integer (or maybe two 32-bit integers) and use Boolean logic on it to compare and work with it. "old integer" XOR "new integer" != 0 will tell you if something has changed, and then you can eventually detect which bits changed too. It's a little bit more mathematics than trying to do it with a boolean array, but it works much faster and with less code.
  23. This is all nice and good as long as you can assume that you deal with ANSI strings only (with optionally an extended character set, which is however codepage dependent and therefore absolutely not transparently portable from one computer to the next). And it is not even fully right in LabVIEW now, since LabVIEW really uses multibyte (MBCS) encoding. So autoindexing over a string has a serious problem: some would expect it to return bytes, while I would expect it to return characters, which is absolutely not the same thing in MBCS and UTF encodings. The only way to represent an MBCS or UTF character as a single numeric on any platform would ultimately be to use UTF32 encoding, which requires 32-bit characters, but not all platforms on which LabVIEW runs support that out of the box, and adding iconv or ICU support to a realtime platform has some far reaching consequences in terms of extra dependencies and performance. Java internally uses exclusively Unicode, and yes, you have to iterate over a string by converting it to a character array or indexing the character position explicitly. And there is a strict separation between byte stream formats and string formats. Going from one to the other always requires an explicit conversion with an optional encoding specification (most conversions also allow a default encoding, which is usually UTF8).
  24. Currently you are quite right. However, once NI adds Unicode support (if and when they do) you will run into problems if you just assume that string == byte array. So better get used to the idea that they might not be the same. There is in fact an INI key that adds preliminary Unicode support to LabVIEW, but it still causes more trouble than it solves, for many reasons, among them the following: the traditional "string == byte array" principle has produced a lot of legacy code that is basically impossible not to break when adding Unicode support. There was once a discussion by AQ where he proposed a radical change of string handling in LabVIEW to allow proper support of Unicode. All byte stream nodes such as VISA Read and Write and TCP Read and Write etc. would change to accept byte arrays as input, and there would probably be a new string type that could represent multibyte and wide char strings, while the current string type would slowly get deprecated. A difficulty here is that the various LabVIEW platforms support different types of wide chars (UTF16 on Windows, UTF32 on Unix and at most UTF8 on most realtime systems), so handling those differences in a platform independent manner is a big nightmare. Suddenly string length can either mean byte length, which is different on different platforms, or character length, which is quite time consuming to calculate for longer strings. Most likely, when flattening/converting strings to a byte stream format they would have to be translated to UTF8, which is the lowest common denominator for all LabVIEW platforms (and the standard format for the web nowadays). All in all a very large and complicated undertaking, but one NI has certainly been working on in the background for some years already. Why they haven't started to change the byte stream nodes to at least also accept byte arrays, or maybe better even changed them to take byte arrays only, I'm not sure.
  25. Well, that is of course the easiest solution, but maybe not the quickest. Alternatively you could make use of the unicode.llb library in this post.