Everything posted by Rolf Kalbermatter

  1. From the hardware manufacturer. They are responsible for supporting their product. Given their questionable license practices they may already be out of business, or that could happen anytime in the future, or they may have decided that the market wasn't good enough to pay for real development expenses and stopped supporting the product. Whatever the case, if they can't help you, nobody else can. Developing such a product is certainly a serious investment, but every company sooner or later learns that maintaining and supporting such a product in the long term costs even more in terms of resources, and that is where things usually get abandoned after the initial excitement. The technology is complicated enough that they can't just throw the product on the market and hope for NI to carry the software development burden and cost. There are enough subtle ways to make the NI software NOT work seamlessly with such a product, and that doesn't even require explicit intent.
  2. A Samsung 960 EVO has a maximum transfer speed of 1.5GB/s, and that assumes it uses NVMe rather than SATA. With a 970 EVO you get close to 2GB/s. These are ideal rates and require that the PCI bus controller and disk have an optimal connection and that the PCI bus controller has near perfect chipset drivers. The reality is generally somewhat below that, and the software bindings inside user space are usually even less performant. Old SATA based SSDs max out at around 500MB/s, and that is bus imposed; there is no way to go above that with SATA. The FPGA DMA tech is pretty impressive, but I would be surprised if they can go beyond 1GB/s.
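     For reference, those bus limits follow directly from the line rates (rounded, illustrative numbers): SATA III signals at 6 Gbit/s with 8b/10b encoding, so the payload ceiling is 6e9 × 8/10 / 8 = 600 MB/s, and after protocol overhead real drives land around 500-550 MB/s. An NVMe drive on a PCIe 3.0 x4 link gets about 8 GT/s × 4 lanes × 128/130 / 8 ≈ 3.9 GB/s raw, which is why NVMe drives can pass the SATA barrier by such a margin.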
  3. The comment about being able to choose more than one selection is not true, since it is a radio button list that resets any previous selection when you select something new. I selected my own (company framework), which can sometimes vary since there are customers who use their own framework too. But I have also used DCAF and similar systems, which had a CVT backend for most of the data handling.
  4. I'm afraid the chance for that is very small. Maintaining a separate install is a lot of work, and the Community Edition is a different installation than the standard LabVIEW installer. More importantly: there is no license manager for the Linux version, so there is no way to put up something like the yearly renewal request for activation of the Community Edition. It would basically be way too easy for bad actors to distribute the LabVIEW Community Edition for Linux, with no way for NI to even know where it is used. The yearly reactivation requirement for the Community Edition is the only thing that allows NI to at least track its use in some way and give potential abusers a bad feeling at least once a year.
  5. Unfortunately it does not show the definition of the canfd_frame datatype, which seems to be the one that is important here. But there is a chance that this is the actual problem. I would expect the 8 bytes in the cluster in your datatype to be the actual CAN data. In that case your cluster is missing the UINT64 timestamp;//us element, and if you pass in a big number of frames as length parameter, this of course amounts to 8 bytes missing in the message buffer for every message element, on a total message length that is normally 24 bytes! That adds up very quickly and will cause a problem rather sooner than later even for a small number of messages. Also, your CHANNEL_HANDLE and DEVICE_HANDLE are both pointers, so it would be more correct to declare them as Pointer Sized Integer and use a 64-bit integer on the LabVIEW diagram for them. The way you do it now will work in LabVIEW 32-bit, but it will badly fail if you ever want to move to 64-bit LabVIEW. And even if you say now that that won't happen because there is no 64-bit library available either, the day will come where your library provider will give you a library and, after you complain that it doesn't load in your software, will just comment: "Who the hell is still using 32-bit software?"
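     To illustrate what I mean, here is a hypothetical reconstruction of the 24-byte receive record based purely on the sizes mentioned above; the names and exact layout in the real zlgcan header may differ:

        /* Hypothetical reconstruction, not the vendor's actual header. */
        #include <stdint.h>

        #pragma pack(push, 1)
        typedef struct {
            uint32_t can_id;     /* CAN identifier plus flag bits            */
            uint8_t  len;        /* payload length                           */
            uint8_t  flags;      /* frame flags                              */
            uint8_t  res0;       /* reserved                                 */
            uint8_t  res1;       /* reserved                                 */
            uint8_t  data[8];    /* payload: what your cluster contains      */
            uint64_t timestamp;  /* us: the element missing in your cluster  */
        } ReceiveRecord;         /* sizeof(ReceiveRecord) == 24              */
        #pragma pack(pop)

        /* The handles are pointers, so on the LabVIEW side they belong in a
           pointer-sized integer in the Call Library Node configuration: */
        typedef void *DEVICE_HANDLE;
        typedef void *CHANNEL_HANDLE;

     If LabVIEW only allocates 16 bytes per element but the DLL writes 24, every received frame overruns the buffer by 8 bytes, which matches corruption that grows with the number of frames.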
  6. The LAVA palette itself only installs a Lava icon into the LabVIEW palettes. When you then install Lava libraries (possibly OpenG libraries), they should appear in there.
  7. No, that information is generally only available to people outside of NI on a limited "need to know" basis, and the decision about that is handled by AEs for simple issues or by the product manager of the product in question for more involved issues.
  8. From the look of it I would guess a bug in your zlgcan_wrap.dll or in one of the myriad other DLLs it depends on, directly or indirectly. Nothing in the LabVIEW diagram looks suspicious from the little information (none) we got from you about this DLL interface! So what is the C declaration of this function and its datatypes and subtypes?
  9. This is a pretty old version. The newest (not yet released) version can be obtained from here for the moment.
  10. I believe you! 🙂 During testing of this release I came across a problem that at first dumbfounded me. On most systems it seemed to work fine, but when executing it in LabVIEW 7.1 on Windows it consistently crashed. The problem turned out to be memory alignment related. One of the data structures passed to the shared library happened to be 43 bytes long. Inside the shared library there was however an assignment operation where an internal temporary variable of that structure type, located on the stack, was first filled in and then assigned to the passed-in variable. C allows assigning whole structure variables by value, and the compiler then generates code to copy the whole variable. Except that Visual C did not bother to copy exactly 43 bytes but simply copied 48 bytes, which resulted in random trash from the stack being copied past the end of the variable. On most platforms LabVIEW seemed to align the parameters it was passing to the Call Library Node such that this extra buffer overwrite didn't collide with any of the other parameters, but LabVIEW 7.1 somehow always packed the parameters tightly, so this copying corrupted the buffer pointer passed in the next parameter of the function. That pointer was normally supposed to be NULL, but of course wasn't NULL anymore after this assignment, and then the shared library crashed. I'm pretty sure that this was also the reason why it would normally encounter trouble on 64-bit Linux. And no, this problem did not exist in pre-4.1 versions. This particular structure got extended when I incorporated the latest minizip 1.2 sources from Nathan Moinvaziri to support 64-bit archive operations. Previous versions used the standard stock minizip 1.1 sources included in the zlib source distribution.
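     In code, the failure mode looks roughly like this (a contrived sketch to show the mechanism, not the actual minizip source):

        /* Contrived sketch of the mechanism described above. */
        #pragma pack(push, 1)
        typedef struct {
            char      name[35];
            long long offset;      /* 35 + 8 = 43 bytes when packed */
        } entry_info;              /* sizeof(entry_info) == 43      */
        #pragma pack(pop)

        void fill_entry(entry_info *out)
        {
            entry_info tmp = {0};  /* temporary on the stack */
            /* ... fill in tmp ... */
            *out = tmp;            /* whole-struct assignment; Visual C was
                                      observed to copy this in aligned
                                      chunks, 48 bytes in total, trashing
                                      the 5 bytes that follow *out */
        }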
  11. Right, but it's a bit more complicated than that, because the same issue, although less extreme, exists on all other LabVIEW platforms. And there are various troubles in turning the underlying file ID into a LabVIEW refnum. On Linux LabVIEW uses the POSIX functions, and so does MacOSX 64-bit, but not 32-bit, or at least not the versions I tested (it could have changed later, but that would potentially be even worse). In addition, while modern Linux platforms use UTF8 throughout, that was different before and can still be configured differently (although I'm hard pressed to imagine why someone would do something stupid like that). And to make matters worse, there is really no standardized way to determine what codepage was used when a ZIP archive was created. There is a newer option to use UTF8, which is indicated by a flag in the file entry for each file, but if that flag is not set, the entry is in whatever OEM codepage (not ANSI) the computer was using at the time the archive was created. And that could, and for most problematic archives also will, be a different codepage than on the computer on which you want to extract the files. It's a complicated problem and part of it is basically unsolvable.
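     For reference, that UTF8 indication is bit 11 of the general purpose bit flags stored with each entry. Checking it is trivial, but when it is clear the archive gives you nothing to identify the codepage:

        /* Checking the ZIP 'language encoding' flag (bit 11 of the general
           purpose bit flags of an entry, per the ZIP appnote). */
        #include <stdint.h>

        #define ZIP_GPFLAG_UTF8 0x0800

        int name_is_utf8(uint16_t gp_flags)
        {
            /* When this returns 0, the name is in whatever OEM codepage the
               archiving machine used, and nothing in the archive records
               which one - that is the unsolvable part. */
            return (gp_flags & ZIP_GPFLAG_UTF8) != 0;
        }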
  12. That is normal. The VIs are still in LabVIEW 7.0 format, and the File Dialog used in that version did not have an error cluster, so when mutating it to 8.0 and higher, LabVIEW will insert some compatibility code on the fly. Hmmm, it works fine on my computer, but that is VIPM 2014 and I'm not fully sure if I didn't do some registry fiddling in the past to fix something. I would rather avoid having to make a VI package which can only be installed with VIPM of the same version or newer.
  13. Here is a new version 4.2.0b1 of the ZIP library. I didn't test it in every LabVIEW version on every platform. What I did test was MacOSX 32-bit and 64-bit LabVIEW 2014; Windows 32-bit and 64-bit LabVIEW 7.0, 7.1, 8.6, 2009, 2016 and 2018; Linux 32-bit 7.1 and 8.6; and NI Linux x86 LabVIEW 2016. Other realtime targets I didn't have handy at the moment. Support for Linux 64-bit and NI Linux RT ARM as well as VxWorks and Pharlap is contained. The realtime support will only get extracted when installing into LabVIEW for Windows 32-bit, through a separate exe file that is invoked and will prompt for administrative elevation of this installer. You then have to go into NI MAX, into the Software part of your target, and select to install additional components. In the list an OpenG ZIP Tools version 4.2.0 package should be visible; select that to be installed on your target. The following problems remain that I haven't implemented/fixed yet: 1) Archives that contain file names with encodings other than your platform codepage will certainly go wrong. This is probably not solvable without doing absolutely every file IO operation in the shared library too, since the LabVIEW file IO functions don't support any other encodings in the path. 2) If you try to zip up directories containing soft/hard links, the current implementation will compress the actual target file/directory into the archive instead of a link, and expanding zip archives that contain such links will expand just a small text file containing the link destination. This is something I'm looking into solving in the next release by optionally allowing a special link entry to be added to the archive and creating such a link on the filesystem when extracting. This is mostly of concern on Linux and MacOSX; while Windows also allows such links nowadays, it is still quite an esoteric feature there and user accessible support for it is minimal (you have to use the command line or install additional third party tools to create/modify such links). Hope to hear from other platforms and versions and how it goes there. Without some feedback I'm not going to create a release. oglib_lvzip-4.2.0b1-1.ogp
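     For the link handling, the plan would follow the common archiver convention (a minimal sketch assuming POSIX; the actual implementation may differ): the special entry's data is simply the link target path, and extraction recreates the link from it:

        /* Sketch: recreate a symbolic link whose target path was stored
           as the data of a special link entry in the archive. */
        #include <unistd.h>

        int restore_link(const char *target, const char *path)
        {
            /* create a symbolic link at 'path' pointing to 'target' */
            return symlink(target, path);
        }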
  14. Are you sure you installed the OpenG ZIP Library 4.1 on that RT target after you updated the LabVIEW version on it? Pending this issue, I should be able to have a test package ready sometime this week which is supposed to support Windows, MacOSX and Linux (all 32-bit and 64-bit) and NI Linux RT (ARM and x64). Support for the VxWorks and Pharlap RT targets will be available but not tested.
  15. So Microsoft tightened security once more and made common data files not so common anymore, so that not every user can access them? Good to know.
  16. Ohh well I missed the "snapshot" 😆 But I would have to agree with everything in your post. Matlab isn't going to make this any easier at all. 😆
  17. I'll try to take that into my testing, but need to still install LabVIEW 2019 for that (and cross my fingers that it won't damage older LabVIEW versions on my computer).
  18. Are you serious? Do you want to operate those cameras in 320 * 240 pixel mode? First do some basic calculations about data throughput before asking such questions. Your USB bus will certainly start to hiccup if you try to transfer that many cameras at full resolution simultaneously to your computer; it's likely to cause trouble even with a lot fewer cameras. USB was never intended for so many simultaneous high speed devices, and even with GigE ethernet you will get into trouble if your cameras have even remotely modern resolutions and you want to do more than one or two frames per second per camera. And once you get all that data into your computer, you will be hard pressed to do any significant image processing on it in realtime.
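     To put illustrative numbers on it: a single modest 1920 × 1080 8-bit monochrome camera at 30 fps already produces 1920 × 1080 × 30 ≈ 62 MB/s. A USB 3.0 bus delivers maybe 350-400 MB/s in practice, shared by everything on it, so five or six such cameras saturate the bus before any processing has happened, and a GigE link at roughly 115 MB/s usable can't even carry two such streams.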
  19. Sounds suspiciously like a padding mistake. I see 38 or 39 plus a multiple of 48 in those numbers.
  20. It depends what you try to do. As long as you don't try to access a particular zip or unzip session from multiple places in parallel, the zlib zip library has always been safe for reentrant execution, as it does not contain global state that spans across sessions. The underlying zlib library (the pure compression/decompression algorithms) itself is even less of a problem, as it does not have the complex archive maintenance that is needed for a ZIP archive but only works on immediate memory streams. If you open a zip (or unzip) session somewhere, branch it to two different locations and try to write streams to it from both locations, you are going to crash sooner or later. Each session stores state that is used across method invocations, and even if I protected the individual function calls with a per-session mutex (or made the functions all execute in the UI thread), you would still potentially corrupt the zip archive stream, or in the case of an unzip operation retrieve a different stream than you think you do in a particular location. As long as you don't access a specific session (zip or unzip refnum) from multiple places in parallel you were always fine though, and that will remain so in the future. This is pretty much the same as trying to read or write a specific file from multiple places (through the same refnum or a separately opened one): you can do that, but expecting the reads and writes to work properly and the file to have proper data content afterwards is pretty much impossible. There is however no problem in writing (and/or reading) two (or more) different files on disk in parallel. So setting the VIs to shared clone should work (all the state is stored in the session behind the refnum), but I'm not going to do that for now.
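     At the level of the underlying minizip API the rule looks like this (a minimal sketch; the LabVIEW refnum corresponds to the zipFile session handle):

        /* Independent sessions are fine in parallel; one session shared
           between two writers is not. Uses the minizip API (zip.h). */
        #include "zip.h"

        void parallel_sessions(void)
        {
            /* Fine: each writer owns its own session (its own refnum).
               Another thread could work on 'b' while this one uses 'a'. */
            zipFile a = zipOpen("a.zip", APPEND_STATUS_CREATE);
            zipFile b = zipOpen("b.zip", APPEND_STATUS_CREATE);

            zipOpenNewFileInZip(a, "one.txt", NULL, NULL, 0, NULL, 0,
                                NULL, Z_DEFLATED, Z_DEFAULT_COMPRESSION);
            zipWriteInFileInZip(a, "hello", 5);
            zipCloseFileInZip(a);

            zipClose(a, NULL);
            zipClose(b, NULL);

            /* NOT fine: branching 'a' to two loops and streaming into it
               from both; the per-session state interleaves and the archive
               stream gets corrupted. */
        }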
  21. I probably can't test the VxWorks targets for now but can create at least the RT image to be installed for them. No guarantee that it can even load though without having tested it before myself.
  22. Vision doesn't require realtime at all. What are you really trying to do here?
  23. It most definitely does, with some caveats such as what hardware your computer may use. As to licensing, NI has so far mostly avoided the answer, but from the reactions so far it is clear that they don't feel compelled to create a standalone version of NI Linux for PCs. As far as licensing is concerned, the NI Linux part itself is a no-brainer: it is Linux after all, and you are always allowed to rebuild that for whatever hardware you want. The more interesting part is the NI-VISA, NI-this and NI-that software and of course the LabVIEW real-time engine that you also need to have installed on such a system for it to be useful as a LabVIEW Realtime target. This is clearly NI owned software, and unless you have an explicitly spelled out license that allows you to use it on such a system, you are simply violating NI copyrights if you copy any of these files to a NI Linux operated platform of your own. That is aside from technical issues such as ABI compatibility and CPU architecture/family: for instance, not every ARM CPU core is able to execute the LabVIEW ARM compiled modules. You need a Cortex-A or compatible CPU core, the more powerful type compared to the Cortex-M and Cortex-R cores that are meant for deeply embedded devices or reliable real-time platforms, or to pre-Cortex era cores.
  24. I worked on that mainly at the end of last year but found some time to resume testing recently. The code and VIs are more or less ready, but I do need to do a bit more testing on Linux, Mac and the different real-time targets. Especially Mac and the real-time targets prove to be quite a hassle: Mac because I don't work on it often nowadays, and the real-time targets because debugging shared libraries on them is always quite some hassle and each flavor is again different. I could however use some extra eyes for testing, and I don't mean the shared library part itself but simply its general operation. I might be able to create a preliminary OGP package for installation through VIPM within a week or so if you want to test it. Let me know which platforms you would want to test it on and how, and I will check what I can do.
  25. Why would you want to do that? Compiling a Linux image for embedded hardware is not a trivial feat. Yes, NI does provide the scripts that are necessary to do that, but you also need a perfectly set up development system with the right version of the gcc toolchain to even hope to get that working seamlessly. Even slight version differences can mean that you have to edit scripts, and such edits are really not for the faint of heart. You need very deep knowledge of Linux in general, and especially of compiling embedded Linux kernels, to hope that your edits will result in anything other than more errors. Googling the errors is in these cases usually not a solution either, because you mostly find only answers from other noobs who have no idea what they are doing and just post random recommendations.