Everything posted by Rolf Kalbermatter
-
Never having looked at the Project Provider Framework at all, I can't really say for sure, but I would assume that in order to verify that a PPF is valid this check is done on every load, so it is part of the provided PPF base framework. With flarn having admitted to having broken password protection before, it seems not so hard to guess how it all went. And yes, PPFs have the potential to wreck a LabVIEW installation completely and, even worse, modify code on the fly in a way that is very hard to detect. So this "signing" business is most likely much less about NI not wanting developers to be able to add plugins, and much more about safeguarding those customers who have VERY stringent requirements about approved software running on their systems. They are out there, and they have rules that even forbid installing the OpenG VIs since they are not from an officially approved source.
-
The link gives an error, and the main site is flagged as suspicious by McAfee.
-
Can I download a "config file" from Git at run-time?
Rolf Kalbermatter replied to nitulandia's topic in Source Code Control
The easiest is most likely to use the command line tool of whatever Git client you install. I do the same for Subversion, calling svn.exe with "svn status --show-updates --verbose". Parsing the returned string is some work, but it is easily doable in a generic manner that works well. I'm quite sure Git can be driven the same way, and this gives you a very flexible and easy interface without any need to go .NET, etc. Most of the tools you would otherwise integrate take the command line approach in the end too; a sketch of the relevant git commands is shown below.
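As a rough illustration (the repository URL and local path are placeholders, not from the original question), the commands invoked through System Exec could look something like this:

git clone https://example.com/myrepo.git C:\temp\myrepo
git -C C:\temp\myrepo pull
git -C C:\temp\myrepo show HEAD:config.ini

The first command creates the local copy once, the second updates it at run-time, and the third writes the content of config.ini as stored in Git to standard output, where it can be captured and parsed like any other string.
-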
VI Server is meant to work between LabVIEW versions and platforms transparently. There shouldn't really be anything that could break. Well, there used to be properties, such as the platform window handles, that were 32-bit only until LabVIEW 2009. They are now deprecated but still accessible, and if you happen to use them you could run into difficulties when moving to 64-bit platforms and trying to access them, remotely or locally.
-
Some of these objects have existed in LabVIEW for a long time and never done much more than crash it! I assume they are cruft left over from some experiments that were either abandoned at some point, or the person who sneaked them in suddenly left and nobody ever noticed. The whole LabVIEW code base is huge and no single person in this world has a complete overview of everything that is inside it.
-
No, private nodes are yellow LabVIEW nodes that are not available in the standard palette (but can be generated with your "Generate VI Object" method). The idea is that your external shared library somehow creates an object reference (usually a pointer to whatever your shared library finds suitable to manage your object) and this object reference then needs to be assigned to the user tag refnum. This can be done either on the LabVIEW diagram with such a node after the call to the shared library function that creates the object reference, or inside the shared library itself by calling undocumented LabVIEW manager functions. Consequently there are matching LabVIEW diagram nodes and manager API calls to deregister an object reference from a user tag. But again, unless you intend to start writing shared libraries (C/C++ programming) to allow access to some sort of device or other functionality, this really isn't interesting to you at all.
-
It's a generic user tag refnum. The functionality behind it relies on information found in the resource/objmgr directory inside LabVIEW. Basically, the rc files in there can define an object hierarchy, and for each object type you can define methods, properties and events that map to specific exported functions from a shared library. Once the generic tag refnum has been set to represent a specific object type from one of the object hierarchies, it is not generic anymore and you cannot select object types from other object hierarchies. Flags on the object type inside the rc file also let you specify whether the user is allowed to select any other object type within the object hierarchy at all. It's all pretty involved and complicated (a single error in the rc file usually makes it completely useless, and you can go and start guessing where the error is). To interface a shared library to a LabVIEW user tag refnum you also need either some private diagram nodes to register the session handle returned from your shared library with a user tag refnum (and one to deallocate it), or internal LabVIEW manager functions that do the same. But unless you write drivers for some kind of interface in external shared libraries, the user tag refnum has no practical meaning for you at all. And it requires your shared library to be explicitly written to work with LabVIEW; it is not a means to interface LabVIEW to standard shared libraries at all.
-
If it seems limited to your PC, then the most likely suspect would be the network card and its driver in that PC. It wouldn't be the first time that network card drivers do buggy things. Maybe it has to do with jumbo frame handling; try to see if you can disable that in the driver configuration. As far as I know cRIO doesn't support jumbo frames at all, so there shouldn't be any jumbo frames transmitted, but it could be that enabled jumbo frame handling in the driver tries to be too smart and reassembles multiple TCP packets into such frames before passing them to the Windows socket interface.
-
It's pretty common to have the male connector on the base board and the female part on the daughter board. I have definitely seen mostly this setup (and yes, I am talking about professional products here) when they didn't use some other special connector altogether. One of the reasons is probably that the male connectors cost less and always have to be soldered on the board, while the daughter board is only sometimes necessary. They are not alone; look at the Raspberry Pi, for instance, which uses a male connector too. Both solutions have their advantages and disadvantages. As far as connecting cables goes, if you don't use flat ribbon cables with IDC connectors, the male connectors require more expensive test lead connectors, but unlike pushing bare wires into female connectors, they won't damage the contact spring in the female connector. EDIT: Another reason is that the corresponding male IDC connectors for flat ribbon cables used to be not only excessively expensive but also often very hard to find.
-
I took a short look at your solution. While I can't determine if you have included all requirement tags (something you should definitely attempt to do as much as possible), the overall architecture certainly looks good enough. There is always something to be said for a cleaner and more structured architecture, but considering the time constraints of the CLA exam you have to compromise, and faster is certainly better than neat and clean in this particular case, although without some architecture you would fail too. Did you do all this in 4 hours?? If so, I think you are set for the CLA exam. If you needed significantly more time than 4 hours, you may want to practice some of the subsystem template implementations a little more to get them done in as short a time as possible. As a pointer, you may decide to do even less coding (not much more than the actual case and loop structures) and put a bit more prose text in there instead. It may not seem to make a big difference, but it's usually faster to write a short text than to find the correct node in the palettes, place it on the diagram and wire it up, often having to write some kind of comment anyhow too. If you are familiar with Quick Drop, set aside two minutes at the start of the exam to configure it to your liking. Also take the time to recreate your personal configuration as much as possible on a new LabVIEW install; practice where the settings are beforehand so you don't lose time configuring the machine on which you do the exam. An unfamiliar palette view, auto-wiring or auto-tool setting can be a major pain in the ass when you are scrambling for time to get your CLA done, even though it may only seem like fractions of a second that you lose each time.
-
Definitely! They have been there for a long time. This is one of the possible reasons for the dreaded "Insane VI" error messages in earlier LabVIEW versions. Your disk just isn't as reliable as you would like, and LabVIEW's binary format, especially for older files, is very sensitive to even the slightest error there.
-
If you are going to write a wrapper anyhow, I would forget about the stupid cluster and make it something like:

void LV_some_func(char* data, int length, long start);

Declare the struct in your code, copy the elements into it, then call the API function.

void LV_some_other_func(char* data, int *length, long *start);

Here, get the struct with the API function and then copy the contents into the parameters. Make sure to pass in "length" the actually allocated buffer length in the caller (your LabVIEW diagram), and to not copy more than that into the buffer. Then update "length" and "start" and return to LabVIEW. Something along these lines:

void LV_write_block(const char* data, int length, long start)
{
    datablock block;
    block.no_of_bytes = length > 10240 ? 10240 : length;
    block.start = start;
    memcpy(block.data, data, block.no_of_bytes);
    write_block(block);
}

void LV_read_block(char* data, int *length, long *start)
{
    datablock *block = read_block();
    *length = *length > block->no_of_bytes ? block->no_of_bytes : *length;
    *start = block->start;
    memcpy(data, block->data, *length);
    deallocate_block(block); /* This depends on how the block is allocated.
                                If it is static memory this would not be possible. */
}

But as pointed out before, the struct containing fixed size arrays being passed by value is likely a problem when not using the same compiler for your wrapper DLL as was used for the library DLL you want to interface. And just for your information, long is only a 32 bit integer in Windows, even when using 64-bit Windows and LabVIEW. long is only 64 bits on 64-bit Linux (and similar Unix environments)!!!
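For reference, a hedged sketch of what the datablock struct assumed by the wrapper above might look like; the field names and the 10240 byte payload size come from the wrapper code, but the real declaration must of course be taken from the library's own header:

typedef struct {
    long start;          /* start address/offset of the block */
    int  no_of_bytes;    /* number of valid bytes in data */
    char data[10240];    /* fixed size payload buffer */
} datablock;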
-
This prototype seems very awkward. Passing a cluster by value usually means that all the elements in the cluster are passed as individual parameters over the stack. BUT there are several issues with that! First, the fixed size string obviously can't be passed over the stack as individual bytes, as that would blow your stack immediately. So it's likely that the C compiler would pass it as a C string pointer anyhow, but "likely" means it is just an assumption on my side; I don't really know, and different compilers might well think differently about how to do that. So you are dealing with an API that is most likely not even compiler independent in this respect.

The more interesting thing is that the function seems to return a pointer to that structure. Since the structure was never passed into the function as such, the C compiler has to allocate it somehow, and now you have the additional problem of knowing how it does that. Is it a static memory area that is reused with every function call, causing nasty multithreading issues, or does it malloc the buffer, relying on the caller to properly deallocate it after each call?? And if it is using malloc, which one does it use???

Basically this function signature raises so many questions, with ambiguous answers that will most likely also depend on the C compiler used, that it is simply a stupid idea to even do in C, let's not talk about trying to call it from anything else. The only way to answer these questions properly is to disassemble the DLL and check these things specifically in the assembly code, and then hope that the DLL developer won't release a new DLL version compiled with a different C compiler version. If you don't find this worrying enough, I'm not sure there is anything that would worry you in this world!

Well, I now see that it is your DLL!!!!! Forget it and change that signature immediately!!!
1) Pass this structure as a pointer parameter; then Adapt to Type will generate the correct parameter passing.
2) Do NOT use malloc in a function to return memory buffers to the caller UNLESS you also provide corresponding exported functions for the caller to deallocate those buffers. The malloc/free used by your DLL is only guaranteed to be the same malloc/free used by the caller if both are compiled with exactly the same C compiler and the same linker settings, and those linker settings don't link the static C runtime library into your DLL. That is something you generally can NEVER assume when using DLLs, and in your case it isn't true anyhow, since your DLL is generated using gcc while LabVIEW is compiled using Visual C. And various LabVIEW versions are compiled using different Visual C versions, each having a different C runtime library. But don't believe gcc is better in that respect! A declaration-level sketch of these two changes is shown below.
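To illustrate (a minimal sketch only; the type and function names here are assumptions for illustration, not taken from the actual DLL), the two recommendations could translate into declarations like these:

typedef struct {
    long start;
    int  no_of_bytes;
    char data[1024];     /* fixed size string; use the real size from the actual struct */
} datablock;

/* 1) the caller owns the memory: LabVIEW passes the cluster via "Adapt to Type"
      as a pointer to the struct, and the DLL only fills it in */
int read_block_into(datablock *block);

/* 2) if the DLL really must allocate, it also has to export the matching
      deallocator so allocation and deallocation use the same C runtime */
datablock *allocate_block(void);
void free_block(datablock *block);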
-
Man, you must have a lot of idle time. As far as documentation for the binary VI format goes, all I know is that it basically follows the Macintosh resource format from the good old Mac OS Classic times. That knowledge allows one to identify and access the various resources in a VI, but of course that is just the container format, not an understanding of the individual formats of each resource type. While some of them used to be fairly similar to classic Mac OS formats, others, and that is the majority of the VI resources, are very LabVIEW specific, and the LabVIEW developers more or less threw together whatever they felt was necessary into a C structure and then flattened that to the resource stream. Without access to the LabVIEW C++ source code it is basically impossible to decode these resources in a meaningful way, and even more difficult to modify them and write them back into the binary VI structure. A VI Explorer really doesn't seem possible without access to the actual LabVIEW source code, which would have to be gained either illegally or through a breach of at least some non-disclosure agreement. Also, many of the bigger resources are nowadays ZLIB compressed, which adds an extra complication layer to all this. Personally I think your best (and almost only) bet to get at this information is to apply as a LabVIEW developer within NI, but expect them to require you to sign some NDAs before you start, and they will do a thorough screening; some of your posts in the past may be a bit of an obstacle there.
-
Well, the next step in debugging this would be to enable another mode on the read side and see what data actually arrives. Then compare that data with what you think you sent on the cRIO side.
-
I can't be the first one to have tried this. XD
Rolf Kalbermatter replied to Sparkette's topic in LabVIEW General
By that line of argument about this being a failure of LabVIEW, all guns would have to come with a protection so that they can't fire at all, so nobody can get harmed. -
You use buffered mode. This mode only returns: 1) once the timeout has expired, without retrieving any data, or 2) whenever all the requested bytes have arrived. It seems your communication somehow loses bytes somewhere. With buffered mode you return with a timeout, the data stays in the buffer, and it then gets read and interpreted as another length code, putting you even further offside. A sketch of what that buffered behaviour corresponds to is shown below.
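As a rough illustration of what that buffered behaviour corresponds to at the socket level (a sketch only, this is not how LabVIEW implements it internally):

#include <sys/types.h>
#include <sys/socket.h>

/* Keep calling recv() until exactly "len" bytes have arrived, or fail on
   error/timeout/closed connection. If bytes get lost on the wire, the next
   "length" field read this way is actually payload data, and the framing of
   the protocol is permanently out of sync from then on. */
static int recv_exact(int sock, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(sock, buf + got, len - got, 0);
        if (n <= 0)
            return -1;
        got += (size_t)n;
    }
    return 0;
}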
-
I can't be the first one to have tried this. XD
Rolf Kalbermatter replied to Sparkette's topic in LabVIEW General
Same as JKSH. Just because I can make a super-mega-prontosaurus cluster (and I have seen some people do that), it would never ever come into my mind to do that myself. Same with C++ object hierarchies that are cascaded over umpteen levels! Your example simply points out that LabVIEW's datatype handling is not only highly recursive, but also able to handle that. As far as practical usage for such a beast goes, it ranks even lower than my super-mega-prontosaurus cluster mentioned earlier. -
Why is the Spartan 3E driver only allowed for educational use?
Rolf Kalbermatter replied to Sparkette's topic in Embedded
There are many possible reasons. Not all of them may be considered by NI, but some of them for sure.

Professional FPGA development tools are a pricey thing. LabVIEW sits somewhere in the lower middle of the price range, with other solutions from Cadence and the like being considerably more expensive. The FPGA compiler tools from Xilinx themselves, as well as from other FPGA manufacturers, also have a pretty steep price tag when bought for professional use. The sale of Spartan 3E tools has an entirely different meaning for Xilinx than for NI: for Xilinx it is a means to get their chips used in more designs, for NI it mostly distracts people from buying cRIO and myRIO hardware. Even someone without a commercial background will be able to see the difference. You can't rationalize the decisions of a corporate company with your desire to get as many things as possible for as little money as possible.

NI without doubt had to make a deal with Xilinx to be allowed to use their FPGA compiler tool chain within LabVIEW, and even though Xilinx is of course interested in selling their chips in the end, they will hardly have handed their compiler tools, which represent a very major investment in terms of software developer time, to NI for free. So NI had to make a significant investment for the FPGA compiler integration into LabVIEW, both in terms of redistribution fees for the Xilinx compiler tool chain and in terms of the development work for the LabVIEW integration. Part of that cost gets carried by the sales of cRIO and other FPGA based hardware products from NI. When used with the Spartan 3E developer board there is absolutely no hardware sale involved for NI, and you have pointed out yourself that there are tools out there to avoid even paying any LabVIEW fees to NI. So there is absolutely no interest for NI to support Spartan 3E and other non-NI hardware with their software tools outside of education.

NI has a strong dedication to supporting educational institutions, because some of the students may end up working within NI for some time and others may go to other employers who might be potential customers for NI hardware in the future. Hobbyists, as bad as that may sound, are much less likely to bring in future sales. They either don't work in an environment that is a potential customer for NI, don't have purchasing influence, or if they do work in a place that could be interesting for NI, they most likely have professional means to contact NI and get a loaner or some other special deal for evaluation purposes. NI is not, and most likely never will be, in the market for hobbyist hardware. That market has very low margins, very short product life cycles, and hard-to-beat free software tools, although you have to accept that the quality of those tools may at times be less than ideal and that support for them may drop in the blink of an eye if the main developer finds another, more interesting target. -
While that is true, the property and method menus do get rather messy and unstructured when this is enabled, so for normal development work I definitely prefer this option to be disabled. YMMV if all you do with LabVIEW is digging for rusty nails and other attic relics.
-
What do you call the not-top-level VIs?
Rolf Kalbermatter replied to Aristos Queue's topic in LabVIEW General
My logical understanding of these terms is quite specific. Top-level VIs are VIs that run at the top of a VI hierarchy. They can be the main VI that starts a LabVIEW application, but just as much VIs that get loaded through VI Server and run as independent "daemons" in parallel with the rest of the LabVIEW application. They don't have direct links to the rest of the application other than through classical inter-application communication (IAC) means like pipes, TCP/IP, or files, and occasionally Intelligent Global Variables, which don't classify as inter-application communication since they only work within the same process, but the principle is very much like IAC. In fact, the main VI in most of my applications (the one assigned in the Application Builder as startup VI) is only a loader with a splash screen that loads the actual main VI as another top-level VI and runs it, after which the loader terminates itself cleanly. SubVIs are anything else called by a VI, either implicitly by being part of the diagram or explicitly through the use of Call By Reference. They can show their front panel if they implement some form of dialog or other user interface, but usually don't. The new Asynchronous Call By Reference "can" sort of create something in between, but in most cases it is used more like a Call By Reference with simply a delayed synchronization for the termination of the subVI. -
How to Calculate HEX time string to Normal Time string (SYSTEM)
Rolf Kalbermatter replied to pravin's topic in User Interface
I see! You let your embedded devices have a normal 8 hour working day and interpret the number of working days as hours! -
Magical variant to data? Or bug?
Rolf Kalbermatter replied to John Lokanis's topic in LabVIEW General
This is not the same thing. Variant to Data can very well deal with arrays and even clusters, as long as the data structure inside the variant is actually compatible. That doesn't even mean it needs to be exactly the same: LabVIEW will happily convert numerics, timestamps, etc. inside the variant into strings. It will only choke on fundamentally incompatible elements and on some where it is debatable whether it should still attempt a conversion. On the other hand, the conversion from timestamp to string, or from floating point to string for instance, will use the platform specific formatting rules (system setting dependent). That is often not what one wants when it's meant for more than just display purposes. But LabVIEW doesn't have a runtime mind-reading interface (yet). As to the original topic, I'm not sure I like it. The lazy dog in me says: sure, it's fine! But the purist prefers explicit type definition for such things. The Call Library Node is another function that attempts back-propagation of types for "Adapt to Type" terminals, but this usually fails as soon as there is any structure border in between. And it can indeed cause nasty problems if one changes the datatype of a control downstream on the wire, without even a warning indication of a change to the Call Library Node configuration, and suddenly the application crashes nastily. -
How to Calculate HEX time string to Normal Time string (SYSTEM)
Rolf Kalbermatter replied to pravin's topic in User Interface
Your hex time is a so-called timer tick time. Its counter starts at boot up of the system (or any other time your embedded system likes) and simply counts up, wrapping around after 0xFFFFFFFF (2^32 = ~4 billion) ms = ~4 million seconds = ~50 days. As such there is absolutely no good way to convert this into an absolute timestamp, since its reference time (the 0 point) is redefined every 50 days or so, or whenever you start up/reset your system. Your value indicates that your system was probably up and running for around 254688 seconds = ~3 days since it was last booted/reset. A day has 86400 seconds, so 254688 seconds is certainly more than 8.8 hours. A small example of the conversion is shown below.
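As a minimal sketch of that conversion in C (the tick value used here is simply the ~254688 seconds mentioned above expressed in milliseconds, not the exact hex value from the original question):

#include <stdio.h>

int main(void)
{
    /* 32 bit millisecond tick counter: elapsed time since boot/reset,
       not an absolute timestamp */
    unsigned long tick_ms = 254688000UL;
    unsigned long total_s = tick_ms / 1000UL;

    printf("%lu days, %lu hours, %lu minutes, %lu seconds since reset\n",
           total_s / 86400UL, (total_s % 86400UL) / 3600UL,
           (total_s % 3600UL) / 60UL, total_s % 60UL);
    return 0;
}
-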
They are part of the LabVIEW code base.