Everything posted by Rolf Kalbermatter

  1. The problem is that NI seems to be getting out of a lot of hardware in recent years. Most Vision hardware has been discontinued or at least abandoned (no new products released, and technical support on the still-sold products is definitely sub-par compared to what it used to be). NI Motion is completely discontinued (which is a smaller loss, as it had its problems and NI was never fully committed to competing against companies like MKS/Newport and similar in that area). NI DAQ doesn't have the focus it used to have. NI has clearly set its targets on other areas and has in some ways already moved on for some time. That may be good news for their stockholders, but not so great news for their existing user base.
  2. Not out of the box. 32-bit LabVIEW interfaces to 32-bit Python. So you would need some kind of 32-bit Python to 64-bit TensorFlow remoting interface. If 64-bit LabVIEW is a requirement, the lowest possible version is 2009 on Windows and around 2014 on the Mac and Linux platforms.
  3. If you want to take the Python route, then of course. As far as the Call Library Node is concerned, there is virtually no difference since at least LabVIEW 2009, and even before that, the only real difference from 8.0 onwards to 2009 is the automatic support for 32-bit and 64-bit DLL interfacing, at least where pointers are passed directly as parameters. Once you deal with pointers inside structures, you have to either create a wrapper DLL anyhow or deal with conditional code compilation on the LabVIEW diagram for the different bitnesses (see the sketch below).
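
     A minimal C sketch (my illustration, not from any specific driver) of why structures containing pointers are the painful case: the pointer field changes size between bitnesses, which shifts the offset of everything behind it, so a single flattened LabVIEW cluster cannot match both layouts.

        /* A struct with an embedded pointer has a different size and
         * layout on 32-bit and 64-bit platforms. */
        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        typedef struct {
            int32_t id;      /* 4 bytes on both platforms               */
            char   *name;    /* 4 bytes on 32-bit, 8 bytes on 64-bit    */
            int32_t flags;   /* lands at offset 8 or 16 (after padding) */
        } Item;

        int main(void)
        {
            /* prints 12/8 when compiled as 32-bit, 24/16 as 64-bit */
            printf("sizeof(Item) = %zu, flags at offset %zu\n",
                   sizeof(Item), offsetof(Item, flags));
            return 0;
        }
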
  4. NXG is Windows only anyhow. Sure, NI says they have kept options open for other platforms with NXG, and to some extent that must be true, because they need to be able to keep building executables for at least the NI Linux RT targets. But I would say it is a safe bet that the significantly longer than expected development time of LabVIEW NXG is not caused mainly by these attempts at maintaining full multiplatform support; there have most probably been decisions to take shortcuts that make a multiplatform version mostly unfeasible. NI's bet is maybe that .Net Core will eventually get mature enough that they can port NXG to other .Net Core supported platforms easily, if and only if the market should turn at some point and Windows suddenly becomes a niche product 😀.

     Python may be an option for those people who have used Visual Basic in the past, but I do not see how you can efficiently build Python applications that combine all kinds of high performance IO such as DAQ, Vision, and Instrument Control with graphical user interfaces, all in one. Sure, for many bench test systems that is not really needed and a command line operated test application can work too, but that's not quite what most customers want. 😂 And don't tell me there are GUI libraries for Python. I know there are, and I know there are people who have created amazing looking apps that way, but when did you last venture into calling GDI functions to build a GUI? I tried to create GUIs in Java a few years ago and that was a pretty painful experience. The graphical editors available for that are limited and flaky at best. Most auto-created code from them eventually has to be modified and even partly rewritten to get a decent working GUI. I can't see the Python GUI frameworks offering a really better experience.
  5. Actually, unixODBC does exist, but it's not trivial to find drivers for some database systems that will work with your unixODBC version. The Microsoft SQL Server drivers for unixODBC require a very specific unixODBC version to work. The bigger problem, however, is that pretty much all database access libraries that exist for LabVIEW go either through the ADO/DAO ActiveX interface or the .Net database API, which in turn internally interface to ODBC for the drivers that are not native ADO/DAO or .Net. Both ActiveX and .Net are not available in LabVIEW on non-Windows systems. So all those Database Toolkits will do nothing for you on a Linux system, even if you got unixODBC set up correctly and working with the driver. Accessing the ODBC API through Call Library Nodes is doable and has been done by some people under Windows, although nothing is publicly available that could be considered a full toolkit. What I have seen is a starting point, but not a full featured toolkit, and porting it to work with the unixODBC API would be another extra effort. The ODBC API is complex enough to make this a bit of a challenge, also because some of the API interface is not exactly LabVIEW friendly.
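
     For a sense of what such a Call Library Node based toolkit has to wrap, here is a minimal sketch of the ODBC C API connection sequence (the connection string is a placeholder, not a working DSN). On Linux this builds against unixODBC with cc demo.c -lodbc.

        #include <stdio.h>
        #include <sql.h>
        #include <sqlext.h>

        int main(void)
        {
            SQLHENV env = SQL_NULL_HENV;
            SQLHDBC dbc = SQL_NULL_HDBC;
            SQLCHAR outStr[1024];
            SQLSMALLINT outLen;

            /* allocate environment and connection handles */
            SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
            SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION,
                          (SQLPOINTER)SQL_OV_ODBC3, 0);
            SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

            /* the DSN refers to whatever driver unixODBC has configured */
            SQLRETURN rc = SQLDriverConnect(dbc, NULL,
                (SQLCHAR *)"DSN=mydb;UID=user;PWD=secret;", SQL_NTS,
                outStr, (SQLSMALLINT)sizeof(outStr), &outLen,
                SQL_DRIVER_NOPROMPT);

            printf("connect: %s\n", SQL_SUCCEEDED(rc) ? "ok" : "failed");

            SQLDisconnect(dbc);
            SQLFreeHandle(SQL_HANDLE_DBC, dbc);
            SQLFreeHandle(SQL_HANDLE_ENV, env);
            return 0;
        }
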
  6. Linux ODBC, in the form of iODBC or unixODBC, works in principle. The idea is not bad, but it is hampered by the fact that you need a compiled version of a database's ODBC driver for your platform. As that is not generally something all database providers are eager to supply, it makes unixODBC less interesting overall. In cases where you have a native driver like FreeTDS, I would usually consider that preferable to trying to get unixODBC to work. unixODBC is an extra complication on top of a driver, and the ODBC manager implementation is pretty complex in order to provide version compatibility between both higher and lower version ODBC clients and higher and lower version drivers. This makes setting up a unixODBC installation more cumbersome. On the upside, of course, you only have to interface to one API and can connect to different databases simply by reconfiguring the connection.
  7. The VI sources are currently still in 7.0, mainly because one of my Linux test systems has the 7.0 version available (the other is 8.6). I used to have projects that needed 7.1 support for this library.
  8. The SQL Server protocol has been reverse engineered, as it is based on the old Sybase Tabular Data Stream (TDS) format that Sybase documented at some point in a public document. Microsoft licensed the Sybase code and eventually created the SQL Server product from it. The underlying network protocol is still very much the original TDS format with some extra features sprinkled into it by Microsoft over the years. This is the MS document for the original Sybase 4.2 version of the TDS documentation. This is a more detailed documentation of the later Microsoft modifications to the protocol. The "official" open source implementation of the TDS protocol is done by the FreeTDS project. Their implementation also works for the more modern 7.x protocol versions used in recent SQL Server versions, but as with anything that was reverse engineered (even with proper protocol documentation), there are some obscure features that might not work properly with the FreeTDS implementation. Compiling the FreeTDS sources for a cRIO shouldn't be that complicated. Reimplementing it all in LabVIEW on top of the TCP primitives is theoretically also doable, but the encryption features of this protocol will be hard to implement properly that way.
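
     As a flavor of what a native reimplementation on the TCP primitives would deal with: per the MS-TDS documentation, every TDS message is split into packets that each start with the same 8-byte header. A C rendering of that header:

        #include <stdint.h>

        /* The 8-byte header at the start of every TDS packet. */
        typedef struct {
            uint8_t  type;      /* message type, e.g. 0x01 SQL batch,
                                   0x12 pre-login                        */
            uint8_t  status;    /* 0x01 = end of message (last packet)   */
            uint16_t length;    /* packet length incl. header, big-endian */
            uint16_t spid;      /* server process id, usually 0 from the
                                   client                                 */
            uint8_t  packet_id; /* incrementing packet number            */
            uint8_t  window;    /* unused, always 0                      */
        } TDSPacketHeader;
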
  9. The lvzip.so file on normal Linux should generally remain in the root folder of the lvzip package for source code distributions. That should make sure that the application builder includes the shared library in the application build in a support folder (usually called data) on all platforms where that matters. Doesn't that work for you? I don't really have experience with application builds on Linux; I only use that platform for testing in the LabVIEW IDE, and I only have two LabVIEW versions available for that. If LabVIEW does not automatically include shared libraries in an executable build on Linux, you should probably include it explicitly in the build specification as always included and make sure it is put in the root folder of the application or the support folder. It's not as simple as taking the lvzlib.so from File Group 6 for any Linux application. I was trying to find a way in VIPM to directly specify which file was for which bitness in a package, but it doesn't support that on a file group basis so far, only for the whole package. Whichever shared library you use from that File Group needs to be called lvzlib.so, though, to make it work with the VIs. Some of these restrictions were dictated by the fact that I needed to support pre-2009 LabVIEW versions, and some of my non-Windows test environments are still based on such versions, so just blanket upgrading everything to 2009 or beyond to get rid of some of these restrictions is not a simple option.
  10. That's most likely heavily influenced by the fact that the original Concept VI image analysis library, which they acquired from Graftek I believe, had to create some sort of handle-like object without the ability to hack into LabVIEW itself. Their solution was to create a cluster with an image name string and some extra attributes, including the actual pointer. They made the cluster such that only the string was visible. To a casual user it looked like it was just the name of the image, but internally it consisted of a lot more. The name was used to register each handle in an internal list, and each image could also be retrieved by name alone from this list when the handle had somehow become invalid. In hindsight it was not the ideal choice, but back then (early 1990s) LabVIEW programming was also not quite at today's standard level. Error clusters were not yet used throughout everything, and most functions, including that image library, only returned an integer that could indicate an error code. External code programming in LabVIEW was entirely CIN based, and refnums, which now exist in abundance, only existed for file IO and network communication.
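
     A conceptual reconstruction (my guess at the shape, not actual NI source) of such a handle: a cluster whose only visible element is the name string, with the real pointer and attributes riding along hidden behind it.

        #include <stdint.h>

        /* Hypothetical Concept VI style image handle. */
        typedef struct {
            char     name[256];  /* image name, the only part shown to
                                    the user                             */
            void    *image;      /* actual pointer to the image data     */
            int32_t  type;       /* pixel type and other hidden
                                    attributes                           */
            int32_t  border;
        } ImageHandle;
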
  11. Not really, unless you want to build some very custom stuff into your executable itself. And you would need an executable that does not statically link to the OpenG ZIP library, as at the time you start it up it cannot reference the shared library anywhere, or the loading will fail and the executable won't start. To me that sounds like a lot more trouble than it's worth, and you have the same issue with NI toolkits: if you make use of, for instance, DAQmx functions, you have to make sure the DAQmx driver is installed on the target before you can deploy an executable that uses it. The Pharlap ETS platform did include some means of deploying shared libraries directly together with VIs when you deployed the program from the project. That may have sounded like a cool thing initially, when a Windows DLL was simply executable on Pharlap too, but nowadays most Windows DLLs won't load on Pharlap and have to be created specifically for the Pharlap ETS target. For non-Windows-like OSes such as the NI Linux RT system, where the ELF shared library is simply Greek to the Windows based LabVIEW environment, this is even more complicated.
  12. The OpenG package file is simply a ZIP file in disguise, with an ini-style package file (called spec) in the root that describes where the different file groups should go in your LabVIEW installation, with restrictions for which version and platform they apply to. If you have 7-Zip installed, you can right-click on a *.ogp file and select 7-Zip->Open Archive. Then look in the directories for "File Group 8"; in there is the ogsetup.exe file. This is an Inno Setup file that installs the necessary packages into the correct NI shared location for RT packages. I chose to do it this way because the files have to be installed in a location that is only writable when the process is elevated, and rather than requiring the user to restart VIPM explicitly as admin (and trying to guess the correct location to write the files to from within a post-install hook VI), I created an Inno Setup installer for the necessary files with an embedded manifest that requests elevation authorization from the user. After that, and provided you have full cRIO support for your target installed in NI Max on your machine, you should be able to select the package in the Custom Software Install from within NI Max. Basically, I chose to only extract the ogsetup.exe file into a LabVIEW 32-bit installation, since that is the only way to program LabVIEW RT programs anyway. I figured that someone wanting to install SW packages to an RT target from a computer that is not also used to program that target would be a very unlikely situation.
  13. The difficulty is that it goes beyond the by-value dataflow principle in LabVIEW, and there is no easy way to fix that. The reference nature of images is necessary, as otherwise you run out of memory very quickly when trying to process huge images. But it remains an odd duck in LabVIEW land, and even I sometimes have to remind myself that IMAQ images work differently than standard LabVIEW data. You would think that the DVR might change that a little, as it is in fact a similar concept, but there you need a special node to access the internal data, and that protects against most possible race conditions that might otherwise not be obvious. Not so for IMAQ image references.
  14. It's a named reference. You could think of it as a name plus a pointer. There is an inherent mapping in it, similar to other named references like VISA or DAQmx refnums. If you pass in a string, LabVIEW converts it automatically to a refnum by looking up the name in its internal registry for that refnum class. If you pass in a refnum whose internal "pointer" is a valid reference, it will use that reference directly and save the intrinsic lookup in the refnum registry for the according class (class meaning here a refnum class, not a LabVIEW class).
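
     A hypothetical model of that dual lookup (invented names and a toy registry, not the actual LabVIEW runtime API): the cached pointer is trusted only while it is still registered, otherwise the name is used as the fallback key.

        #include <stddef.h>
        #include <string.h>

        typedef struct Image Image;      /* opaque image object          */

        typedef struct {
            const char *name;            /* registered image name        */
            Image      *ptr;             /* cached pointer, may be stale */
        } ImageRefnum;

        typedef struct {
            const char *names[16];       /* toy registry: parallel arrays */
            Image      *images[16];
            size_t      count;
        } Registry;

        static Image *resolve(const Registry *reg, ImageRefnum r)
        {
            for (size_t i = 0; i < reg->count; i++)
                if (reg->images[i] == r.ptr)      /* pointer still valid */
                    return r.ptr;                 /* fast path, no name  */
            for (size_t i = 0; i < reg->count; i++)
                if (r.name && strcmp(reg->names[i], r.name) == 0)
                    return reg->images[i];        /* fall back to name   */
            return NULL;                          /* unknown refnum      */
        }
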
  15. Well, I happen to have committed code to the lvZIP library in the last few weeks, including yesterday evening. Do I have to redo that to get it into the new GitHub repository? And I'm still busy trying to add support for symlinks and UTF-8 filenames to the lvZIP library. The basic work to support this is mostly done; I just need to properly integrate it into the lvZIP library and then do some corner case testing on the non-mainstream platforms like Mac OS X, Pharlap and VxWorks. Mac OS X because it differs from other Unix systems in some ways, and Pharlap and VxWorks since they do not support symlinks and multibyte encoding at all. My SF account name is labviewer.
  16. From the hardware manufacturer. They are responsible for supporting their product. Given their questionable license practice, they may be out of business already, or it could happen anytime in the future, or they decided the market wasn't good enough to pay for real development expenses and stopped supporting the product. Whatever the case, if they can't help you, nobody else can. Developing such a product is for sure a serious investment, but every company sooner or later learns that maintaining and supporting such a product in the long term costs even more in terms of resources, and that is where things usually get abandoned after the initial excitement. The technology is complicated enough that they can't just throw the product on the market and hope for NI to carry the software development burden and cost. There are enough subtle ways to make the NI software NOT work seamlessly with such a product, and that doesn't even need explicit intent.
  17. A Samsung 960 EVO has a maximum transfer speed of 1.5GB/s, and that assumes it uses NVMe rather than SATA. With a 970 EVO you get close to 2GB/s. These are ideal rates and require that the PCIe bus controller and disk have an optimal connection and the PCIe bus controller has near perfect chipset drivers. The reality is generally somewhat below that, and the software bindings in user space are usually even less performant. Old SATA based SSDs max out at around 500MB/s, and that is bus imposed: SATA III signals at 6Gbit/s, and after the 8b/10b encoding overhead that leaves 600MB/s raw, of which roughly 550MB/s remains after protocol overhead. There is no way to go above that with SATA. The FPGA DMA tech is pretty impressive, but I would be surprised if they can go beyond 1GB/s.
  18. The comment about being able to choose more than one selection is not true, since it is a radio button list that resets any previous selection when you select something new. I selected my own (company) framework, which can sometimes vary, since there are customers who use their own framework too. But I have also used DCAF and similar systems, which had a CVT backend for most of the data handling.
  19. I'm afraid the chance of that is very small. Maintaining a separate install is a lot of work, and the Community Edition is a different installation than the standard LabVIEW installer. More importantly: there is no license manager for the Linux version, so there is no way to put up something like the yearly renewal request for activation of the Community Edition. Basically, it would be way too easy for bad actors to redistribute a LabVIEW Community Edition for Linux, with no way for NI to even know where it is used. The yearly reactivation requirement for the Community Edition is the only thing that allows NI to at least track its use in some way and give potential abusers a bad feeling at least once a year.
  20. Unfortunately it does not show the definition of the canfd_frame datatype, which seems to be the one that is important here. But there is a chance that this is the actual problem. I would expect the 8 bytes in the cluster in your datatype to be the actual CAN data. In that case your cluster is missing the UINT64 timestamp;//us element, and if you pass in a big number of frames as the length parameter, this of course amounts to 8 bytes missing in the message buffer for every message element, on a total message length that is normally 24 bytes (see the sketch below)! That adds up very quickly and will cause a problem rather sooner than later, even for small numbers of messages. Also, your CHANNEL_HANDLE and DEVICE_HANDLE are both pointers, so it would be more correct to declare them as pointer-sized integers and use a 64-bit integer on the LabVIEW diagram for them. The way you do it now will work in 32-bit LabVIEW, but it will fail badly if you ever want to move to 64-bit LabVIEW. And even if you say now that that won't happen because there is no 64-bit library available either, the day will come when your library provider gives you a 64-bit library and, after you complain that it doesn't load in your software, just comments: "Who the hell is still using 32-bit software?"
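
     For reference, the classic SocketCAN frame from linux/can.h is 16 bytes (the CAN FD variant widens the data field to 64 bytes). If the vendor record follows that layout and appends the microsecond timestamp mentioned above (my assumption, based on the 24-byte figure), the record your cluster has to match per message looks roughly like this:

        #include <stdint.h>

        /* Classic SocketCAN frame layout as in linux/can.h. */
        struct can_frame {
            uint32_t can_id;                /* 11/29-bit id plus
                                               EFF/RTR/ERR flags         */
            uint8_t  can_dlc;               /* payload length, 0..8      */
            uint8_t  __pad, __res0, __res1; /* padding/reserved          */
            uint8_t  data[8];               /* the actual CAN data       */
        };                                  /* 16 bytes                  */

        /* Hypothetical vendor receive record assumed from the post. */
        struct receive_record {
            struct can_frame frame;         /* 16 bytes                  */
            uint64_t         timestamp;     /* microseconds, 8 bytes     */
        };                                  /* 24 bytes per message      */
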
  21. The LAVA palette itself only installs a Lava icon into the LabVIEW palettes. When you then install Lava libraries (possibly OpenG libraries), they should appear in there.
  22. No, that information is generally only available to people outside of NI on a limited "need to know" basis, and the decision about that is handled by AEs for simple issues, or by the product manager for the product in question for more involved issues.
  23. From the look of it, I would guess a bug in your zlgcan_wrap.dll or one of the myriad other DLLs it depends on directly or indirectly. Nothing in the LabVIEW diagram looks suspicious from the little information (none) we got from you about this DLL interface! So what is the C declaration of this function and its datatypes and subtypes?
  24. This is a pretty old version. The newest (not yet released) version can be gotten from here for the moment.
  25. I believe you! 🙂 During testing of this release I came across a problem that first dumbfounded me. On most systems it seemed to work fine, but when executed in LabVIEW 7.1 on Windows it consistently crashed. The problem turned out to be memory alignment related. One of the data structures passed to the shared library happened to be 43 bytes long. Inside the shared library, however, was an assignment operation where an internal temporary variable of that structure type, located on the stack, was first filled in and then assigned to the passed-in variable. C does allow assigning whole structure variables by value, and the compiler then generates code to copy the whole variable. Except that Visual C did not bother to copy exactly 43 bytes but simply copied 48 bytes, which resulted in random trash from the stack being copied past the end of the variable. On most platforms LabVIEW seemed to align the parameters it was passing to the Call Library Node such that this extra buffer overwrite didn't collide with any of the other parameters, but LabVIEW 7.1 somehow always packed the parameters tightly, so this copying corrupted the buffer pointer passed as the next parameter of the function. This was normally supposed to be a NULL pointer, but of course wasn't NULL after this assignment, and then the shared library crashed. I'm pretty sure this was also the reason why it would normally encounter trouble on 64-bit Linux. And no, this problem did not exist in pre-4.1 versions: this particular structure got extended when I incorporated the latest minizip 1.2 sources from Nathan Moinvaziri to support 64-bit archive operation. Previous versions used the standard stock minizip 1.1 sources included in the zlib source distribution.
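
     A reconstructed illustration of the bug pattern (not the original minizip code): the caller reserves only the packed, 43-byte flattened size, while the library assigns a struct whose natural, padded size is larger, so the copy runs past the end of the buffer.

        #include <stdint.h>
        #include <stdlib.h>

        typedef struct {
            char     name[35];
            uint64_t size;     /* aligned to 8: sizeof(Info) is 48, even
                                  though only 43 bytes carry data        */
        } Info;

        void fill(Info *out)
        {
            Info tmp = {0};
            /* ... populate tmp ... */
            *out = tmp;        /* the compiler copies sizeof(Info) == 48
                                  bytes, padding included                */
        }

        int main(void)
        {
            /* A caller that only reserves the 43 "useful" bytes gets the
               5 trailing bytes written past its buffer -- exactly the
               corruption seen with the tightly packed LabVIEW 7.1
               parameters. */
            void *buf = malloc(43);
            fill((Info *)buf); /* undefined behavior: 48-byte store into
                                  a 43-byte allocation                   */
            free(buf);
            return 0;
        }
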