Rolf Kalbermatter
Everything posted by Rolf Kalbermatter

  1. Actually, the Widechar functions have supported it since at least Windows 2000, but only with the special prefix. The registry hack and application manifest are needed to avoid having to use this prefix, so yes, porting to the Widechar functions is needed in either case to support long file paths. My library adds the special prefix itself and doesn't have to go through manifests and registry settings to use the feature.
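A minimal sketch of what that prefixing amounts to, assuming a hypothetical helper name (`MakeLongPath` is not a real Windows API); the resulting string is what you would hand to Widechar functions such as `CreateFileW()`:

```c
#include <stdlib.h>
#include <wchar.h>

/* Prepend the Win32 long-path prefix so the Widechar file functions
   accept paths longer than MAX_PATH: "\\?\C:\dir\file" for drive
   paths, "\\?\UNC\server\share\file" for UNC paths.
   Hypothetical helper; caller frees the returned buffer. */
wchar_t *MakeLongPath(const wchar_t *path)
{
    const wchar_t *prefix;
    const wchar_t *rest = path;

    if (wcsncmp(path, L"\\\\?\\", 4) == 0)
        prefix = L"";                  /* already prefixed, leave as-is */
    else if (wcsncmp(path, L"\\\\", 2) == 0) {
        prefix = L"\\\\?\\UNC\\";      /* UNC path: replace the leading "\\" */
        rest = path + 2;
    }
    else
        prefix = L"\\\\?\\";           /* drive-letter path */

    size_t len = wcslen(prefix) + wcslen(rest) + 1;
    wchar_t *result = malloc(len * sizeof(wchar_t));
    if (result) {
        wcscpy(result, prefix);
        wcscat(result, rest);
    }
    return result;
}
```

Note that `\\?\` paths are passed to the kernel with minimal parsing, so relative path elements like `..` are no longer resolved for you.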
  2. As Shaun already more or less explained, it is a multilayered problem.
1) The LabVIEW path control internally has the following limitations: a path element (a single directory level or filename) can be at most 255 characters, and the path can have at most 65535 levels. The only limit that is even remotely reachable in practice is the 255-character limit per path level, but I think we can all agree that if your path level names get that long, you probably have other problems to tackle first 😀 (such as getting out of the straitjacket they have surely put you in already).
2) Traditionally, Windows only supported long path names when you used the Widechar file I/O functions, and only when you prepended the path string with a special character sequence. LabVIEW's lack of native Unicode support made that basically impossible. Long path names are limited to roughly 32,000 characters.
3) Somewhere along the line of Windows versions (7? 8?), the requirement to prepend the special character sequence seems to have been relaxed.
4) Since Windows 10, you can enable a registry setting that allows the ANSI functions to support long path names too.
So while there is now theoretically a way to support long path names in LabVIEW on Windows 10, this is hampered by a tiny snag. The path conversion routines between LabVIEW paths and native paths never had to deal with such names, since until recently Windows didn't support them for the ANSI functions, and there are assumptions baked in that paths can't get longer than MAX_PATH characters. This is simply for performance: with a fixed maximum size you don't need to preflight the path to determine how large a dynamic buffer to allocate, and then properly deallocate it afterwards. Instead you simply declare a buffer on the stack, which is basically nothing more than a constant offset added to the stack pointer, and all is well. Very fast and very limiting! This is where it currently still goes wrong.
Now, reviewing the path manager code paths to all properly use dynamically allocated buffers would be possible but quite tedious. And it doesn't really solve the problem fully, since you still need to go and change an obscure registry setting to make it work on a specific computer. It also doesn't solve another big problem, that of localized path names. Any character outside the standard 7-bit ASCII range will NOT transfer well between systems with different locales. To solve this, LabVIEW will need some more involved path manager changes. First, the path needs to support Unicode. That is actually doable, since the Path data type is a private data type: how LabVIEW stores path data inside the handle is completely private, and it could easily change that format to use whatever the natively preferred Unicode character type is for the current platform. On Windows this would be a 16-bit WCHAR; on other platforms it would be either a wchar or a UTF8 char. It wouldn't really matter, since the only other relevant platforms are all Linux or Mac BSD based and use UTF8 for filenames by default. When the path needs to be externalized (in LabVIEW speak: flattened), it would always be converted to and from UTF8. Now LabVIEW can convert its Path to whatever the native path type is (a WCHAR string on Windows, a UTF8 string on other platforms), and it supports long path names and international paths all in one go. The UTF8 format of externalized paths wouldn't be strictly compatible with current paths, but for all practical purposes it would not really be worse than it is now. The only special case would be when saving VIs for previous versions, where it would have to change paths from UTF8 to ASCII at a certain version.
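To make the flattening idea concrete (always UTF8 externally, native wide form internally), here is a minimal, illustrative UTF8 decoder for a single character sequence. It is a sketch only: the function name is made up, it performs no validation, and real code would use MultiByteToWideChar on Windows and leave UTF8 untouched on the UTF8-native platforms.

```c
#include <stdint.h>

/* Decode one UTF8 sequence, return the Unicode codepoint and advance *s.
   Illustrative only: no validation of continuation bytes or overlong forms. */
uint32_t Utf8Next(const unsigned char **s)
{
    const unsigned char *p = *s;
    uint32_t cp;
    int extra;

    if (p[0] < 0x80)      { cp = p[0];        extra = 0; }  /* ASCII */
    else if (p[0] < 0xE0) { cp = p[0] & 0x1F; extra = 1; }  /* 2-byte sequence */
    else if (p[0] < 0xF0) { cp = p[0] & 0x0F; extra = 2; }  /* 3-byte sequence */
    else                  { cp = p[0] & 0x07; extra = 3; }  /* 4-byte sequence */

    for (int i = 1; i <= extra; i++)
        cp = (cp << 6) | (p[i] & 0x3F);   /* fold in 6 payload bits per byte */

    *s = p + 1 + extra;
    return cp;
}
```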
I kind of did attempt something like that for the OpenG ZIP library, but it is hacky and error prone, since I can't really go and change the LabVIEW internal data types. I had to define my own data type representing a Unicode-capable path and then create a counterpart for every single file function I wanted to use with this new path, basically rewriting a large part of the LabVIEW Path and File Manager components. It's ugly, and it really took away most of my motivation to keep working on that package. I have another private library that I used in a grey past to create the LLB Viewer File Explorer extension, before NI made one themselves, and I have modified that to support this type of file path. It works quite well in fact, but it is all very legacy by now. Still, it had long file name and locale-independent file name support some 15 years ago already, with an API that looked almost exactly like the LabVIEW File and Path Managers.
  3. We usually use discrete ones and just use a few digital IO ports in our E-cabinet for them. Which digital IO to use depends on the hardware in the E-cabinet. That could be cRIO digital IOs, Beckhoff PLC IOs, or Beckhoff BusCoupler IOs, usually accessed through the ADS protocol over Ethernet. USB-controlled devices don't work well for non-Windows controllers at all, since you always run into trouble getting drivers.
  4. My real-life experience definitely does not support this statement. I have seen handles being returned that are bigger than 0xFFFFFFFF in value, crashing the process when treated as a 32-bit value. So while this may be true for some handles, it certainly isn't for all Windows handles. And yes, that was about Windows handles, not some third-party library declaring void* pointers as handles that were in reality pointers to a struct (in which case not treating them as pointer-sized integers certainly and positively will cause problems). I do believe that some Windows handles are similar to LabVIEW Magic Cookies, basically an index into an object manager's class-specific private data list, but there are certainly various different approaches, and some handles seem to be rather pointers in nature. For instance, HINSTANCE or HMODULE is basically the virtual address at which the module was loaded into memory, and it is sometimes used to directly access resource lists and other things in a loaded PE module (EXE or DLL) through so-called RVA (Relative Virtual Address) offsets into the module image data. It's not a neat way of doing things, and one should rather use functions from the debug library, but sometimes that is not practical (and if you want to program not-so-official things, it might sometimes be impossible). Of course, doing it all by hand leaves a lot of room to miss some of the complications, so that it will break with non-standard linked module files or with extensions of the PE specification in new Windows versions. Similar things apply to some COM objects like HIMAGELIST and others. They seem to be basically the actual COM object pointer that contains the COM methods' virtual table directly, not some magic cookie implementation that references the COM object pointer. All the ImageList_xxxxx functions in the CommCtrl library are basically just compiled C wrappers that call the according virtual table method on the COM object.
And while COM is object oriented, its ABI is defined in such a strict way that it can easily be called from standard C code too, if you have the correct headers for the COM object class. It's even possible to implement COM classes purely in C, as has been done for a long time by the Wine project, which had a policy that all code needed to be standard C in order to be compilable on as many different platforms as possible. They relaxed that requirement in recent years, as some of the MacOSX APIs can't really be called easily in any way other than Objective C, the Apple way of object-oriented C programming, which originally was an Objective C preprocessor that put everything through a standard C compiler anyhow.
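That strict ABI can be shown with a self-contained mock: an interface pointer is a pointer to a struct whose first member points to a table of function pointers, and the C++ call `obj->Method(arg)` becomes `obj->lpVtbl->Method(obj, arg)` in C. `ICounter` here is invented purely for illustration, not a real COM interface, and real COM would additionally carry the three IUnknown slots and reference counting.

```c
/* Mock of the COM ABI in plain C. Not a real Windows interface. */
typedef struct ICounter ICounter;

typedef struct ICounterVtbl {
    /* real COM: QueryInterface/AddRef/Release would come first */
    int (*Increment)(ICounter *self, int by);
    int (*Value)(ICounter *self);
} ICounterVtbl;

struct ICounter {
    const ICounterVtbl *lpVtbl;  /* first member: pointer to the vtable */
    int count;                   /* implementation state behind the interface */
};

static int Counter_Increment(ICounter *self, int by) { return self->count += by; }
static int Counter_Value(ICounter *self)             { return self->count; }

static const ICounterVtbl counterVtbl = { Counter_Increment, Counter_Value };

ICounter MakeCounter(void)
{
    ICounter c = { &counterVtbl, 0 };
    return c;
}
```

A caller then writes `c.lpVtbl->Increment(&c, 5)`, which is exactly the kind of compiled C wrapper the ImageList_xxxxx functions are.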
  5. Definitely needs some love to work in LabVIEW 64-bit. This library was developed ca. LabVIEW 6i and that is loooooong before the Call Library could support pointer sized integers (LabVIEW 2009) which all the handles in there need to be in order for it to work in 64-bit LabVIEW.
  6. No it's not. A .Net DLL is not supposed to change location in built applications. .Net itself really only knows two default locations where it will search for assemblies:
 - the directory in which the current executable file is located
 - the GAC
Anything else is extra, such as a non-default AppDomain with its custom ApplicationBase. LabVIEW adds to this the option to reference assemblies by full path (which the application builder adjusts to the location you configured the assembly to be installed to), but that path is embedded in the compiled VI and not accessible, just as you can't change the path of subVIs in a compiled executable either.
  7. Not directly. But I solved that in the past by creating VIs that called the .Net (or ActiveX) nodes and then calling those VIs dynamically through VI Server. A sort of plugin system, with the dynamically called VIs containing the .Net or ActiveX nodes.
  8. Debugging pictures is unfortunately not possible. And without the hardware I couldn't really do much either. I cannot comment on the tests your IT department did, but they likely understand even less of the problem than you do and can only do some standard tests that may or may not point out the problem. There is a lot of information in this thread about things to check. I can't really give more ideas. You will have to read through everything and test it on your system to see if you get any results. Debugging network problems is a highly specialized ability that requires understanding of a lot of different things, often a lot of time, and the hardware at hand to go through the many tests and trials, to hopefully end up with some indication of where the problem could be and then find the solution for it. And yes, it is hard. Networking has become ubiquitous to the point that everybody simply expects it to be present and working. In reality the techniques involved are highly complex, and even simple misconfigurations can make it fail. TCP/IP with its many fallback and fail-safe mechanisms sometimes makes this even more complex, since it doesn't just fail flat out but still sort of works, albeit with much degraded performance.
  9. Buaaah, not fair. I'm still only a Rookie! 😂
  10. Very obviously, and the start date seems to be July 31 somehow (more likely whenever you first logged in after the forum update). That's at least what most of my Ranks have as granting date. Funny to see that I managed to create both my 10th and my 500th Lava post, and to pass both my one-month and my one-year anniversary of joining Lava, all on that same day! Apparently the system doesn't know about a 10-year anniversary, but who knows, maybe 8 years from now it may grant me the 25-year anniversary. 😃
  11. That's not enough information to say anything specific. Error 66 simply means that the remote side decided to close the connection. That could be for many reasons, such as:
 - a bad network setup that causes dropouts
 - a DHCP configuration that causes a change of network address, so that the connection is no longer valid and gets reset by the remote side
 - subnet address configuration mismatches
 - another network device trying to grab the network address used by either side of your current connection
and a few dozen more possible problems, including bad hardware in your computer, the remote device, or some network infrastructure in between.
  12. Sorry, but with a me-too message and an image we can't do much. Have you read the comments in this thread? Have you studied them and tried to apply them to your situation? What debugging steps have you done so far? What is your hardware configuration, including the network setup? You really will have to do the debugging yourself. There is no such thing as a free lunch when debugging your own application!
  13. For clSerialInit() you want that parameter to be a pointer-sized integer, Pass: Pointer to Value. For the other functions make it a pointer-sized integer, Pass: Value. If you pass this value around through VI connector panes, always make it a (u)int64 integer. Don't forget to call the clSerialClose() function, or whatever it is called. The clSerialInit() function allocates resources (and almost certainly opens the serial port underneath, making it inaccessible to anyone else until it is closed properly). If you don't close this resource properly, you create a memory leak and likely make the port inaccessible when starting your program again.
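A sketch of why that configuration matters, using a mock of the init/close pair. The names follow the post; the bodies are stand-ins, not the real Camera Link serial driver, which only shares the opaque pointer-sized-handle contract modeled here.

```c
#include <stdint.h>
#include <stdlib.h>

/* Mock driver state; the real driver keeps its port state here. */
typedef struct { int portIndex; } SerialRef;

/* Returns an opaque handle that is really a pointer, so on the LabVIEW
   side it must be a pointer-sized integer (Pass: Pointer to Value),
   carried between VIs as a (u)int64. */
int clSerialInit(uint32_t serialIndex, intptr_t *serialRefPtr)
{
    SerialRef *ref = malloc(sizeof *ref);
    if (!ref)
        return -1;
    ref->portIndex = (int)serialIndex;
    /* Truncating this value to 32 bits can crash a 64-bit process later. */
    *serialRefPtr = (intptr_t)ref;
    return 0;
}

/* Must be called eventually: skipping it leaks the allocation and, in the
   real driver, keeps the serial port locked. */
void clSerialClose(intptr_t serialRef)
{
    free((SerialRef *)serialRef);
}
```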
  14. Not without clearly stating that it was nasty. Some here posted code that was meant to prove that you could do nasty things. But!!! There are a few things you can do to avoid that problem:
 - Don't ever open a VI that you downloaded from the net by double clicking it! Instead open an empty VI and drag the downloaded VI onto its diagram. -> Autorun is disabled that way, and you get the chance to look at the diagram before you decide to run it.
 - LabVIEW 2021 will show a dialog asking if it is ok to start the VI when you directly double click a VI that is configured to run when opened.
  15. Actually I prefer people to backsave their VIs when posting. Images are ok to add to a post, but they do NOT replace the real code, which is needed to actually try things out and see where the problems might be; also, some important code settings cannot be determined from just a pic.
  16. That's fairly paranoid, considering that any VI, even when running in a PPL, is basically still executing inside the same process. There are a lot more things it can do that could be much more dangerous, but you have to strike a balance between security and performance. Starting to isolate each PPL completely from the rest of the system would take a huge amount of development effort and also cause a lot of performance loss. You wouldn't like that at all! VI Server has some strict limitations when operating across LabVIEW contexts, but limiting it even inside the same context would be too restrictive, and it would also mean that you have to consider the entire scripting interface in LabVIEW as very dangerous. And yes, if you use PPLs they could be swapped out by an attacker. But if that is really your concern, you may have a lot of other, graver trouble. Who lets such a person even have access to that computer? And why would they attempt to attack a PPL on that system when they can have the entire cake and eat it too? It's many times easier to attack DLLs, yes even signed DLLs, and take over the entire system, than to try to hack into a PPL with its proprietary format and only get crude control over a single LabVIEW application on that system.
  17. I would suggest moving this into another thread. The title of this thread is about an Open Source test executive, and you clearly state that you have no plans to ever open source it.
  18. Generally you have one top-level header file that declares the main APIs, and then others that declare the datatypes and lower-level APIs, if any. In that case you point the import library wizard to the main header file and tell it, in the additional include directories control, where it can find the other header files. If you really have independent header files, you can create a dummy master header file that includes all the individual header files and point the wizard to that dummy file. But, as I have pointed out many times, the import library wizard can't do magic despite its name. The C headers only define the bare-bones datatype interface of the functions, nothing else. All the semantics about proper buffer handling, memory management, and the correct combination of multiple functions to create real functionality is only documented (if at all) in the prose of the documentation for the DLL. No wizard can interpret arbitrary prose text to extract these semantics out of it. You as programmer have to do it. This means that the VI library created by the import library wizard is almost always just a starting point for the real work. Even in the best case the API is very LabVIEW unfriendly (in LabVIEW you don't worry about first allocating a large enough buffer before calling a function; that is done automatically. A C DLL knows nothing about this kind of thing, and if it does its own management you are in even more trouble when calling it from any non-C environment, since there is absolutely no standard about how this management should be done, and only in C(++) are you able to adapt to pretty much every thinkable management contract, no matter how crazy or überengineered it is).
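One of those undocumented semantics in practice is the common "call twice" buffer contract, which no header can express. `GetDeviceName` is a made-up stand-in for any such DLL export; in LabVIEW you would replicate the two calls yourself around two Call Library Nodes, with an Initialize Array in between.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical DLL export following a typical C buffer contract:
   call with buffer == NULL to learn the required size, allocate,
   then call again with a buffer of at least that size. */
static const char *deviceName = "simulated-device-0";

int GetDeviceName(char *buffer, size_t *size)
{
    size_t needed = strlen(deviceName) + 1;   /* include NUL terminator */
    if (!buffer || *size < needed) {
        *size = needed;                       /* report the required size */
        return buffer ? -1 : 0;               /* -1: supplied buffer too small */
    }
    memcpy(buffer, deviceName, needed);
    return 0;
}
```

Nothing in the header prototype `int GetDeviceName(char *buffer, size_t *size);` tells a wizard that the first call is a size query, which is exactly why the generated VIs are only a starting point.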
  19. Unless you plan to go independent with your own company as a LabVIEW consultant or something, you definitely shouldn't need to invest your own money to attend training. An employer who finds a few days of training too expensive and unnecessary might not be the right employer for you. People here are generally quite friendly and will gladly want to help you. There are a few conditions of course. Nobody likes freeloaders or homework hustlers, so if one of them finds his way here, he might at first get a friendly reminder that he needs to show a bit more effort himself in order to get help. In the rare case that they insist they have a right to be helped no matter what, the tone can get less friendly, although I haven't noticed yelling; it usually rather turns into completely ignoring them. To be honest, I haven't really noticed yelling in the NI fora either. It's not your average YouTube channel, and there is some moderation too, which will often step in when things really are in danger of getting out of hand, which has very rarely happened in the past. But sometimes the answers can get a bit brisk, and I'm sure I have been guilty of not always using my velvet gloves with certain people there who refuse to follow advice because they know better.
  20. And while GetTextExtentPoint32() itself may be fairly fast, you have to somehow, from somewhere, get an HDC to use it on. And that HDC has to have the correct font selected into it, which is part of the time-consuming work the LabVIEW TTextSize() function does. And HDCs are quite precious resources, so creating one up front for every font you may ever use is not a good idea either. The HDC also has to be compatible with the target device (but can't be the screen device itself, as otherwise you clobber the LabVIEW GUIs with artefacts).
  21. The code underneath is definitely NOT thread safe. It concerns the Text Manager, another subsystem of the LabVIEW GUI system, and the entire GUI API is UI_THREAD, since the Windows GDI interface, which these functions all ultimately call, wasn't thread safe back then either and may in various ways still not be. Windows carries some very old legacy burdens that Microsoft tried to work around with the Windows NT GDI system, but there are a few areas where you simply can't do certain things or all kinds of hell break loose. Now, I happen to know pretty much how this function is implemented (it simply calls a few other lower-level undocumented LabVIEW Text Manager functions), and incidentally they are all still exported from the LabVIEW kernel too. When you use a user font, it calls TNewFont() to create a font description, then it basically calls TTextSize() to calculate the Point describing the extent of the surrounding box, and afterwards it calls TDisposeFont() to dispose of the font again if it created it in the first place. For the predefined fonts it skips the font creation and disposal and uses preallocated fonts stored in some app-internal global. So there would be a possibility to cut down on the repeated execution time of the GetTextRect() call for user-defined fonts by only creating the font once and storing it in some variable until you no longer need it. No joy, however, about reducing the execution time of TTextSize() itself. That function is pretty hairy and complex and does quite a few GDI calls, drawing the text into hidden display contexts to determine its extent.
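The create-once caching idea can be sketched like this. Everything here is a stand-in: `Font`, the stub bodies, and the fixed-pitch width model are invented for illustration, since the real TNewFont()/TTextSize()/TDisposeFont() are undocumented and do actual GDI work.

```c
#include <string.h>

typedef struct { int charWidth; int height; } Font;    /* stands in for a font description */
typedef struct { int width; int height; } Extent;

/* Stand-in for TNewFont(): the expensive step we want to do only once. */
static Font TNewFontStub(int pointSize)
{
    Font f = { pointSize / 2, pointSize };   /* crude fixed-pitch model, not real metrics */
    return f;
}

/* Stand-in for TTextSize(): measure with an already-created font. */
static Extent TTextSizeStub(const Font *f, const char *text)
{
    Extent e = { (int)strlen(text) * f->charWidth, f->height };
    return e;
}

/* Instead of create/measure/dispose on every call, keep the font around
   and recreate it only when the requested font actually changes. */
Extent MeasureCached(const char *text, int pointSize)
{
    static Font cached;
    static int cachedSize = 0;               /* 0 = no font cached yet */
    if (cachedSize != pointSize) {
        cached = TNewFontStub(pointSize);    /* expensive step, now rare */
        cachedSize = pointSize;
    }
    return TTextSizeStub(&cached, text);
}
```

Note the measuring step itself stays as expensive as before; only the font creation and disposal are amortized, matching the limitation described above.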
  22. Now that's not just dreaming but fantasizing. 😀 Developing a new LabVIEW target, even if it could reuse most of the code for Windows, which I'm not sure would be the case here, is a major investment. Why would they want to do that if they already have LabVIEW NI Linux RT? And the main reason that there is no LabVIEW NI Linux RT for Desktop solution is that NI hasn't figured out (or possibly bothered to figure out) how to handle proper licensing. They do not want to enable some Chinese copycat shop to develop their own RIO hardware and sell it with the message "Just go and buy this NI software license and you are all legal". That such hardware can be developed has been proven in the past; it basically died (at least around here; maybe they sell it in the tens of thousands per month inside China) because such a solution did not exist, and anyone using that hardware was not just operating in a grey area but in fully forbidden territory, with lots of legal mines in the ground.
  23. No, but without a legal license for the NI LabVIEW Realtime Runtime and all the various other NI software, it's a complete no-go for anything other than your home automation project.
  24. Looks interesting. It reminds me a little of the layout principle that Java GUIs have. It needs a bit of getting used to when you are familiar with systems like the LabVIEW GUI, where you put everything at absolute (or relative absolute) positions. And as Mikael says, those class constants make this all look unnecessarily bulky. And they are in fact unnecessary if you build your class instantiation properly. The class instantiation methods should not need to be dynamic dispatch, and then you can make that input optional or possibly even omit it. The control data type of your Create method already defines the class without the need for a class constant.
  25. If performance is of no concern to you AND you don't care about slick GUIs for your apps, Python is a good option nowadays. How long it will be before there is a new kid on the block that everybody caters to while declaring everything else as from yesterday is of course always the big question. Python with its automatic data type system is both easy to program in and easy to botch a program with. And such flexibility of course has a performance cost too 😀, as every variable access needs to be verified and classified before the interpreter can apply the correct operation to it. And while you can pack large datasets in special containers to be performant, the default Python array object (the list) is anything but ideal for big data crunching. Other than that, I think I would stick to C/C++ for most things if I were to abandon LabVIEW.