Everything posted by Rolf Kalbermatter

  1. Right, but it's a bit more complicated than that, because the same issue, although less extreme, exists on all other LabVIEW platforms. And there are various difficulties in turning the underlying file ID into a LabVIEW refnum. On Linux LabVIEW uses the POSIX functions, and so does it on MacOSX 64-bit, but not on 32-bit, or at least not in the versions I tested (it could have changed later, but that would potentially be even worse). In addition, while modern Linux platforms use UTF-8 throughout, that was different before and can still be configured differently (although I'm hard pressed to imagine why someone would do something stupid like that). And to make matters worse, there is really no standardized way to determine what codepage was used when a ZIP archive was created. There is a newer option to use UTF-8, which is indicated by a flag in the file entry for each file, but if that flag is not set the entry is in whatever OEM codepage (not ANSI) the computer was using at the time the archive was created. And that could, and for most problematic archives will, be a different codepage than on the computer on which you want to extract the files. It's a complicated problem and part of it is basically unsolvable.
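     For illustration, the UTF-8 indication mentioned above is bit 11 of the general purpose bit flag of each ZIP entry (per the PKWARE APPNOTE). A minimal C sketch of that check, not the actual library code:

        #include <stdint.h>
        #include <stdbool.h>

        #define ZIP_GPFLAG_UTF8  0x0800   /* bit 11: language encoding flag (EFS) */

        /* gpFlags would be read from offset 6 of the local file header
           or offset 8 of the central directory entry. */
        static bool zip_entry_name_is_utf8(uint16_t gpFlags)
        {
            return (gpFlags & ZIP_GPFLAG_UTF8) != 0;
        }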
  2. That is normal. The VIs are still in LabVIEW 7.0 format and the File Dialog used in that version did not have an error cluster, so when mutating it to 8.0 and higher, LabVIEW will insert some compatibility code on the fly. Hmmm, it works fine on my computer, but that is VIPM 2014 and I'm not fully sure I didn't do some registry fiddling in the past to fix something. I would rather avoid having to make a VI package that can only be installed with VIPM of the same version or newer.
  3. Here is a new version 4.2.0b1 of the ZIP library. I didn't test it in every LabVIEW version on every platform. What I did test was MacOSX 32-bit and 64-bit LabVIEW 2014; Windows 32-bit and 64-bit LabVIEW 7.0, 7.1, 8.6, 2009, 2016 and 2018; Linux 32-bit 7.1 and 8.6; and NI Linux x86 LabVIEW 2016. I didn't have other realtime targets handy at the moment. Support for Linux 64-bit and NI Linux RT ARM as well as VxWorks and Pharlap is included. The realtime support will only get extracted when installing into LabVIEW for Windows 32-bit, through a separate exe file that is invoked and will prompt for administrative elevation of that installer. You then have to go into NI MAX, into the Software part of your target, and select to install additional components. In the list an OpenG ZIP Tools version 4.2.0 package should be visible; select that to be installed on your target. There are still the following problems that I haven't implemented/fixed yet: 1) Archives that contain file names with encodings other than your platform codepage will certainly go wrong. This is probably not solvable without doing absolutely every file IO operation in the shared library too, since the LabVIEW file IO functions don't support any other encodings in the path. 2) If you try to zip up directories containing soft/hard links, the current implementation will compress the actual target file/directory into the archive instead of a link, and expanding ZIP archives that contain such links will expand just a small text file containing the link destination. This is something I'm looking into solving in the next release by optionally allowing a special link entry to be added to the archive and creating such a link on the filesystem when extracting (see the sketch below). This is mostly of concern on Linux and MacOSX. While Windows also allows such links nowadays, it is still quite an esoteric feature and user-accessible support for it is minimal (you have to use the command line or install additional third-party tools to create/modify such links). Hope to hear from other platforms and versions how it goes there. Without some feedback I'm not going to create a release. oglib_lvzip-4.2.0b1-1.ogp
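     A minimal sketch of what such link handling on extraction could look like, assuming the conventional Info-ZIP approach of storing the Unix mode in the upper 16 bits of the external attributes and the link target as the entry data. This is an assumption about a possible future implementation, not what the library currently does:

        #include <stdint.h>
        #include <sys/stat.h>
        #include <unistd.h>

        /* Create a symbolic link instead of a regular file when extracting,
           if the entry was stored as a link (error handling elided). */
        static int restore_entry_as_symlink(uint32_t externalAttr,
                                            const char *linkTarget, /* entry data */
                                            const char *destPath)
        {
            mode_t mode = (mode_t)(externalAttr >> 16);
            if ((mode & S_IFMT) == S_IFLNK)
                return symlink(linkTarget, destPath);
            return -1;  /* not a link entry, extract as a normal file */
        }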
  4. Are you sure you installed the OpenG ZIP Library 4.1 on that RT target after you updated the LabVIEW version on it? Pending this issue, I should be able to have a test package ready sometime this week that is supposed to support Windows, MacOSX and Linux (all 32-bit and 64-bit) and NI Linux RT (ARM and x64). Support for the VxWorks and Pharlap RT targets will be available but not tested.
  5. So Microsoft tightened security once more and made Common data files not so common anymore that every user can access them? Good to know.
  6. Oh well, I missed the "snapshot" 😆 But I would have to agree with everything in your post. Matlab isn't going to make this any easier at all. 😆
  7. I'll try to take that into my testing, but I still need to install LabVIEW 2019 for that (and cross my fingers that it won't damage older LabVIEW versions on my computer).
  8. Are you serious? Do you want to operate those cameras in 320 x 240 pixel mode? First do some basic calculations about data throughput before starting to ask such questions (see the rough numbers below). Your USB bus will certainly start to hiccup if you try to transfer the data of that many cameras at full resolution simultaneously to your computer. It's likely to cause trouble even with a lot fewer cameras. USB was never intended for so many simultaneous high-speed devices, and even with GigE Ethernet you will get into trouble if your cameras have even remotely modern resolutions and you want to do more than one or two frames per second per camera. And once you get all that data into your computer, you will be hard pressed to do any significant image processing on it in realtime.
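     As a rough back-of-the-envelope check (the resolution, bit depth, frame rate and camera count here are purely illustrative assumptions):

        #include <stdio.h>

        int main(void)
        {
            const double width = 1920, height = 1080;   /* pixels, 8-bit mono */
            const double fps = 30, cameras = 16;
            double perCamera = width * height * fps;    /* bytes per second */
            double total = perCamera * cameras;
            printf("Per camera: %.0f MB/s, total: %.0f MB/s\n",
                   perCamera / 1e6, total / 1e6);
            /* ~62 MB/s per camera, ~995 MB/s total: far beyond the ~40 MB/s of
               USB 2.0 and above what a single USB 3.0 host controller
               (~400 MB/s practical) can sustain. */
            return 0;
        }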
  9. Sounds suspiciously like a padding mistake. I see 38 or 39 plus a multiple of 48 in those numbers.
  10. It depends on what you try to do. As long as you don't try to access a particular zip or unzip session from multiple places in parallel, the ZIP library has always been safe for reentrant execution, as it does not contain global state that spans across sessions. The underlying zlib library (the pure compression/decompression algorithms) itself is even less of a problem, as it does not have the complex archive maintenance that is needed for a ZIP archive but only works on immediate memory streams. If you try to open a zip (or unzip) session somewhere, then branch it to two different locations and try to write streams to it from both locations, you are going to crash sooner or later. Each session stores state that is used across method invocations, and even if I were to protect the individual function calls with a per-session mutex (or make the functions all execute in the UI thread), you would still potentially corrupt the zip archive stream, or in the case of an unzip operation retrieve a different stream than what you think you do in a particular location. As long as you don't access a specific session (zip or unzip refnum) from multiple places in parallel you were always fine, though, and that will remain like this in the future. This is pretty much the same as trying to read or write a specific file from multiple places (through the same refnum or a separately opened one). You can do that, but expecting the reads and writes to work properly and end up with proper data content in the file afterwards is pretty much impossible. There is however no problem in writing (and/or reading) in parallel to two (or more) different files on disk. So setting the VIs to shared clone should work (all the state is stored in the session behind the refnum), but I'm not going to do that for now.
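     To make the per-session state point concrete, a minimal C illustration (a hypothetical data layout, not the library's actual structures): a per-session lock would serialize individual calls, but it cannot make interleaved writes from two places produce a meaningful stream.

        #include <pthread.h>
        #include <stdint.h>

        /* Hypothetical per-session state, for illustration only. */
        typedef struct {
            pthread_mutex_t lock;         /* could serialize single calls...       */
            uint64_t        writeOffset;  /* ...but every call advances this and   */
            int             currentEntry; /* the current entry, so two writers on  */
                                          /* the SAME session still interleave     */
                                          /* their data into one corrupted stream. */
        } ZipSession;

        /* Two independent ZipSession instances writing to two different
           archives share nothing and are therefore perfectly fine. */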
  11. I probably can't test the VxWorks targets for now but can create at least the RT image to be installed for them. No guarantee that it can even load though without having tested it before myself.
  12. Vision doesn't require realtime at all. What are you really trying to do here?
  13. It most definitely does, with some caveats such as what hardware your computer may use. As to licensing, NI has so far mostly avoided the answer, but from the reactions so far it is clear that they don't feel compelled to create a standalone version of NI Linux for PCs. As far as licensing is concerned, the NI Linux part itself is a no-brainer: it is Linux after all, and you are always allowed to rebuild that for whatever hardware you want. The more interesting part is the NI-VISA, NI-this and NI-that software and of course the LabVIEW real-time engine that you also need to have installed on such a system for it to be useful as a target for LabVIEW Realtime. This is clearly NI-owned software, and unless you have an explicitly spelled-out license that allows you to use it on such a system, you are simply violating NI copyrights if you copy any of these files to an NI Linux operated platform of your own. Aside from that, there are technical issues such as ABI compatibility and CPU architecture/family: for instance, not every ARM CPU core is able to execute the LabVIEW ARM compiled modules. You need a Cortex-A or compatible CPU core, the more powerful type compared to Cortex-M or Cortex-R cores (which are meant for deeply embedded devices or real-time safety platforms) or pre-Cortex era cores.
  14. I worked on that mainly at the end of last year but found some time to resume testing recently. The code and VIs are more or less ready, but I do need to do a bit more testing on Linux, Mac and the different real-time targets. Especially Mac and the real-time targets prove to be quite a hassle: Mac because I don't work on it often nowadays, and the real-time targets because debugging shared libraries on them is always quite a pain and each flavor is again different. I could however use some extra eyes for testing, and I don't mean the shared library part itself but simply its general operation. I might be able to create a preliminary OGP package for installation through VIPM within a week or so if you want to test it. Let me know which platforms you would want to test it on and how, and I will check what I can do.
  15. Why would you want to do that? Compiling a Linux image for embedded hardware is not a trivial feat. Yes, NI does provide the scripts that are necessary to do that, but you also need a perfectly set up development system with the right version of the gcc toolchain to even hope to get that working seamlessly. Even slight version differences can mean that you have to edit scripts and such, and those edits are really not for the faint of heart. You need very deep knowledge of Linux in general, and especially of compiling embedded Linux kernels, in order to hope that your edits will result in anything other than more errors. Googling the errors is in these cases usually not a solution either, because you mostly find only answers from other noobs who have no idea what they are doing and just post random recommendations.
  16. While it's understandable that you feel frustrated about the failure, part of multiple choice tests is to play with the wording, to test whether people actually read the question or just remember answers from similarly worded questions in other tests. This is fairly common in every single multiple choice test I have done so far! Is it tricky? Yes, definitely! Unfair? Not really. My failures were in XControls and in the Architect terminology, since I never did a single XControl and, despite reading the Advanced Course Manual about XControl development once, just didn't remember all the different ability methods, their catches and in what order they are fired; and the Architect terminology is basically taken from some book that used a specific set of words that might or might not be the holy grail of software development. I could most likely have certified by points if I had kept a record of the events I attended and the occasional presentation. But alas I didn't. Is it too much to ask to collect those points yourself, compared to expecting NI to hire a staff of several secretaries who painstakingly go through every attendance list and try to match those entries to active members of the CLD, CLA and other certification lists? I don't think so!
  17. This has nothing to do with a CIN whatsoever. CINs were a legacy technology in LabVIEW 3 and 4, before LabVIEW learned to interface to shared libraries (*.dll on Windows and *.so on Unix). What you are doing is not a wrapper either, in the sense in which that term is usually used by me and others who regularly deal with this. A wrapper is another shared library, written in C/C++, that interfaces to a certain API and translates it into a more LabVIEW-friendly shared library interface that can be more easily interfaced with the Call Library Node. What you are trying to create is a VI interface library to your shared library. And that is always a tricky business. For one, the Call Library Node can't interface to every C feature out there. C++ object interfaces, callback pointers and also complex structures with embedded pointers are all things that can't be done with the Call Library Node (or, in the case of complex structures with embedded pointers, only with a lot of pain and by handcoding in LabVIEW what a C compiler would normally do mostly automatically). The Import Library Wizard you are trying to use for this is an amazing piece of software, but despite its name NOT a magician. Extracting all the necessary information from a header to safely interface LabVIEW to a C shared library would be a truly magical feat, since the C syntax does not cover all the necessary details about buffer management and such things. This is only documented (if you are lucky) in the prose library documentation that comes with your shared library. If you are unlucky, you have to figure it out by guessing from the naming conventions of variable names and lots of trial and error (meaning: crashing, restarting, editing, saving, crashing, goto begin). This is true for using a shared library in C just as much as in any other language, including LabVIEW. So even if the Import Library Wizard can import all your functions, you really have to go through each generated VI and manually check that the generated code is actually correct. Also, the generated code is, in an attempt to be safe rather than sorry, often unnecessarily inefficient, which is another thing you should be changing. Of course this all requires that you know exactly how the shared library should be interfaced, and if you really do, you very quickly reach the conclusion that creating all those interface VIs from scratch by hand is not only about as fast as going through the Import Library Wizard and then painstakingly checking each of the VIs by hand, but also creates more efficient interface VIs with something more meaningful than the ugly standard light blue Import Library Wizard icons and totally impractical connector panes. The Import Library Wizard for instance can't know that in a function like int32_t MyAwesomeBufferReader(int32_t *buf, int32_t bufSize, int32_t *bufRead) the second parameter is the size of the passed-in buffer and the third is how much data was actually filled into the buffer. It doesn't even know that the first is not just a pointer to a single int value but rather a pointer to an array.
The C syntax does not distinguish between a pointer to a scalar and a pointer to an array, except that it usually allows writing int32_t MyAwesomeBufferReader(int32_t buf[], int32_t bufSize, int32_t *bufRead) to show that the first parameter is actually an array; but this is not used by many programmers despite its increased readability (probably because some ancient C compilers didn't know about this syntax with an incomplete array size, and some libraries are still written to compile even on computers you can only see in museums anymore). The first variant will likely default to a VI with an int32 value "bufSize" as input and two int32 values "buf" and "bufRead" as outputs, and if you select the option to treat return values as errors it will enter the return value into the error cluster as an error if it is not 0. It will also corrupt memory (and likely crash at some point) whenever being called with a bufSize value greater than 1! The proper VI (just going from the somewhat arbitrary naming of the parameters) has however an int32 "number of samples to read" as input that says how much data should be read, and an int32 array output. Before calling the shared library function, the interface VI should allocate an array of "number of samples to read" in length, and on return of the function it should use the "bufRead" value to resize the array to the actually returned data and then pass it back through the array output of the connector pane. The interface VI should hide all the buffer management details from any caller of the VI, as otherwise you are expecting every user of your VI to know not only about C programming details in general but also about the actual programming interface of your shared library function. A totally unusable LabVIEW VI as such! No automatic wizard in the world could be taught this in any way, and even what I just wrote is usually only a reasonable guess based on the parameter naming, which in C has absolutely no meaning at all. (The actual parameter names can be left out completely in the function prototype without any adverse effect when using the function, and the names in the function prototype can also be completely different from the names in the actual function definition, if the programmer wants to obfuscate his code even more.) The actual library documentation would hopefully explain in detail what each parameter is meant to mean, but you can't point an automated tool at a PDF or HTML file and tell it to extract exact programming information from the prose text in there.
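     Expressed in C, the calling pattern the interface VI has to replicate for the hypothetical MyAwesomeBufferReader from above looks roughly like this; a sketch only, with error handling elided:

        #include <stdint.h>
        #include <stdlib.h>

        /* The hypothetical function from the example above. */
        extern int32_t MyAwesomeBufferReader(int32_t *buf, int32_t bufSize, int32_t *bufRead);

        /* Allocate the buffer BEFORE the call, then trim it to the amount
           of data the function actually returned. */
        int32_t ReadSamples(int32_t numToRead, int32_t **outData, int32_t *outCount)
        {
            int32_t *buf = malloc(numToRead * sizeof *buf);
            int32_t read = 0;
            int32_t err = MyAwesomeBufferReader(buf, numToRead, &read);
            if (!err && read < numToRead)
                buf = realloc(buf, (read > 0 ? read : 1) * sizeof *buf);
            *outData = buf;                 /* caller owns and frees the buffer */
            *outCount = err ? 0 : read;
            return err;
        }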
  18. While you bring up valid points here, I think there is no standard whatsoever in the community about this so far. Everybody does as he or she feels at that particular time of the day and may do it differently the next day. So if you want to bring this up, it may be a good idea to document it somewhere in the wiki, basically starting some sort of recommended style guide there. Obviously not everyone will read it, and even if someone does, he won't be shot if he does not follow it. 😀 About your last point, I would personally prefer to have different sections on the same page for now. I also don't see deprecation of pages happening anytime soon, and definitely not as a separate task that anyone would take upon himself. If something gets deprecated in the future it will likely be as part of editing a page for other purposes, such as adding information about new features or similar. And it is anyhow at least 7 years in the future, as NI committed at some point to a support timeframe of 10 years for LabVIEW CG after the first NXG version was released. 😁 With the current progress of LabVIEW NXG, that might be about the time it reaches feature parity with LabVIEW CG. 😆
  19. Well, in that case the remark about the DLLs having to be compiled for ARM was really off. That is for Windows IoT installations on targets like the RPi and similar boards, which all have an ARM CPU (and accordingly can't run Windows IoT Enterprise either, which is a pure x86/x64 install). It all depends on which C compiler they used to create those DLLs. Up until Visual C 2015 or so, each Visual C version came with its own specific C runtime library that had to be installed on every target on which you wanted to run an executable or DLL created with it. While many parts of Windows are compiled with Visual C too, and therefore cause the Windows installation to come with the needed C runtime support already installed, this can and will vary depending on the Windows version and the amount of extra tools and utilities that you install. Also, any extra custom application you install, such as LabVIEW, of course comes with the necessary C runtime support, which gets installed if not already present on the system; but depending on all this, a particular C runtime version may or may not be present on any particular system. Basically you should never copy DLLs to a target system, but install them with their proper installer, which hopefully takes care of installing the correct C runtime support too.
  20. Windows IoT is not a normal Windows installation at all, except for the Windows IoT Enterprise version. The others are really just embedded kernels without any real Win32 subsystem. Accordingly, many standard Windows DLLs like kernel32.dll, user32.dll and many more are not present there. It is basically a Windows installation with a much smaller embedded kernel that only supports Universal apps from the app store. Accordingly, a normal LabVIEW built application that you created on your dev machine should NOT even be able to start, as it is built for x86 and not ARM and requires a fully functioning Win32 subsystem. How do you target your IoT system?
  21. There can't be, as LabVIEW can't and never could load two VIs with the same name into the same application instance at the same time. So it will load the first VI, then attempt to load the second, see that it is already loaded, and then compare the two. Obviously there isn't any difference between the two. While you could argue that the function could check the two VI names for equality before attempting to do anything, you also have to consider that this function is not your average Joe toolkit. It is fairly advanced, and there is something to be said for not sanity checking every possible error on every VI boundary down several hierarchy levels. If the user of this function wants that check, it's easily done before calling this function. If he forgets such trivialities, he may not be the intended target audience for this function.
  22. That's because you guys call it a decimal point, but in Germany one says "Komma" when separating the fractional part from the integer part of a number, so the language is consistent there. (German does also know the word "Dezimalpunkt", but generally the comma is used.) The introduction of the decimal comma started in the 18th century in central Europe, most likely influenced by the French. The English speaking countries continued to use the decimal point. Switzerland is special, as one generally talks about a comma but uses a point, although it is not fully standardised there (most probably due to the German influence, since about half of Switzerland is German speaking). However, in today's computer settings the standard decimal character in all significant OSes is the decimal point, at least for the Swiss German locale. So broadly speaking, the decimal point is used in most English speaking countries and the decimal comma in most non-English speaking European countries. While kind of confusing, I find that more easily understandable than the myriad of date and time formats used 😀. Not to forget that with date and time you end up with other "trivialities" such as time zones, different calendars and even leap seconds and years.
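     How the OS locale setting decides this can be seen with a small C snippet (the locale names are platform dependent and may not be installed on every system):

        #include <locale.h>
        #include <stdio.h>

        int main(void)
        {
            setlocale(LC_NUMERIC, "C");                 /* decimal point */
            printf("%.2f\n", 1234.56);                  /* prints 1234.56 */

            if (setlocale(LC_NUMERIC, "de_DE.UTF-8"))   /* decimal comma, if available */
                printf("%.2f\n", 1234.56);              /* prints 1234,56 */
            return 0;
        }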
  23. Michael, technically what you write is only true for an input array of exactly 5 elements. If you really always want to remove the last three elements the OP needs to use an Array Length node, subtract the number of elements he wants to remove from this length and wire that to the offset. In that case you can leave the size input unwired because it will default to the rest of the array.
  24. Well, that looks like my case, but I'm not aware of having JavaScript disabled anywhere. And with another page (different number at the end) I could suddenly download everything, regardless of whether I was logged in or not.
  25. Minimal extensions installed which should not affect this: Adobe Acrobat, Cisco Webex Extension, Google Analytics Opt-Out, Google Docs Offline, XML Viewer. Chrome add-ons: Docs, Sheets, Slides. Nothing else.