Everything posted by Rolf Kalbermatter

  1. There are many possible reasons. Not all of them may weigh equally at NI, but some certainly do. Professional FPGA development tools are a pricey thing. LabVIEW sits somewhere in the lower middle of the price range, with solutions from Cadence and similar vendors being considerably more expensive. The FPGA compiler tools from Xilinx themselves, as well as those from other FPGA manufacturers, also carry a pretty steep price tag when bought for professional use.

     The sale of Spartan 3E tools has an entirely different meaning for Xilinx than for NI. For Xilinx it is a means to get their chips used in more designs; for NI it is a means to distract people from buying cRIO and myRIO hardware. Even someone without a commercial background will be able to see the difference. You can't rationalize the decisions of a corporation from your desire to get as many things as possible for as little money as possible. NI without doubt had to make a deal with Xilinx to be allowed to use their FPGA compiler tool chain within LabVIEW, and even though Xilinx is of course interested in selling their chips in the end, they hardly will have handed over their compiler tools, which represent a very major investment in software developer time, for free. So NI had to make a significant investment for the FPGA compiler integration into LabVIEW, both in redistribution fees for the Xilinx compiler tool chain and in the development work for the LabVIEW integration. Part of that cost gets carried by the sales of cRIO and other FPGA-based hardware products from NI. When LabVIEW FPGA is used with a Spartan 3E developer board there is no hardware sale involved for NI at all, and you have pointed out yourself that there are tools out there to avoid paying even LabVIEW fees to NI. So outside of education there is no interest for NI in supporting the Spartan 3E and other non-NI hardware with their software tools.

     NI has a strong dedication to supporting educational institutions, because some of those students may end up working at NI for a time, and others will move on to employers who might be potential customers for NI hardware in the future. Hobbyists, as bad as that may sound, are much less likely to bring in future sales. They either don't work in an environment that is a potential customer for NI, don't have purchasing influence, or, if they do work in a place that could be interesting for NI, they most likely have professional channels to contact NI for a loaner or other special evaluation deal. NI is not, and most likely never will be, in the market for hobbyist hardware. That market has very low margins, very short product life cycles, and hard-to-beat free software tools, although you have to accept that the quality of those tools may at times be less than ideal and support for them may vanish in the blink of an eye when the main developer finds a more interesting target.
  2. While that is true, the property and method menus do get rather messy and unstructured when this is enabled, so for normal development work I definitely prefer this option to be disabled. YMMV if all you do with LabVIEW is dig for rusty nails and other attic relics.
  3. My logical understanding of these terms is quite specific. Top-level VIs are VIs that run at the top of a VI hierarchy. They can be the main VI that starts a LabVIEW application, but just as well VIs that get loaded through VI Server and run as independent "daemons" in parallel with the rest of the LabVIEW application. They have no direct links to the rest of the application other than through classical inter-application communication (IAC) means like pipes, TCP/IP, or files, and occasionally Intelligent Global Variables, which don't classify as inter-application communication since they only work within the same process, but the principle is very much like IAC. In fact the main VI in most of my applications (the one assigned in the Application Builder as startup VI) is only a loader with a splash screen that loads the actual main VI as another top-level VI and runs it, after which the loader terminates itself cleanly. SubVIs are everything else called by a VI, either implicitly by being part of the diagram or explicitly through the use of Call By Reference. They can show their front panel if they implement some form of dialog or other user interface, but usually don't. The new Asynchronous Call By Reference "can" sort of create something in between, but in most cases is used more like a Call By Reference with simply a delayed synchronization for the termination of the subVI.
  4. I see! You let your embedded devices have a normal 8-hour working day and interpret the number of working days as hours!
  5. This is not the same thing. Variant to Data can very well deal with arrays and even clusters as long as the data structure inside the variant is actually compatible. That doesn't even mean it needs to be exactly the same: LabVIEW will happily convert numerics, timestamps, etc. inside the variant into strings. It will only choke on fundamentally incompatible elements, and on some where it is debatable whether it should still attempt a conversion. On the other hand, the conversion from timestamp to string, or from floating point to string for instance, will use the platform-specific formatting rules (system setting dependent). That is often not what one wants when it's meant for more than just display purposes. But LabVIEW doesn't have a runtime mind-reading interface (yet). As to the original topic, I'm not sure I like it. The lazy dog in me says: sure, it's fine! But the purist prefers explicit type definitions for such things. The Call Library Node is another function that attempts back propagation of types for an "Adapt to Type" terminal, but this usually fails as soon as there is any structure border in between. And it can indeed cause nasty problems if one changes the datatype of a control downstream of the wire: without even a warning indication of the change to the Call Library Node configuration, the application suddenly crashes nastily.
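     The system-setting-dependent formatting hazard is not LabVIEW-specific; here is a minimal C sketch of the same effect (an analogy, not LabVIEW code, and it assumes a German locale is installed on the system):

         #include <locale.h>
         #include <stdio.h>

         int main(void)
         {
             double v = 1234.5;

             setlocale(LC_NUMERIC, "C");            /* locale-independent "." */
             printf("%.1f\n", v);                   /* prints 1234.5 */

             /* Under a system locale that uses a decimal comma, the very
                same conversion produces a different string. */
             if (setlocale(LC_NUMERIC, "de_DE.UTF-8"))
                 printf("%.1f\n", v);               /* prints 1234,5 */
             return 0;
         }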
  6. Your hex time is a so-called timer tick time. Its counter starts at boot-up of the system (or any other time your embedded system likes) and simply counts up, wrapping around after 0xFFFFFFFF ms (2^32 ms = ~4.3 billion ms = ~4.3 million seconds = ~49.7 days). As such there is absolutely no good way to convert this into an absolute timestamp, since its reference time (the zero point) is defined anew every 50 days or so, or whenever you start up/reset your system. Your value indicates that your system was probably up and running for around 254688 seconds = ~3 days since it was last booted/reset. A day has 86400 seconds, so 254688 seconds is certainly more than 8.8 hours.
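     For completeness, a minimal C sketch of how such a free-running millisecond tick is typically handled (GetTickMs() is a hypothetical stand-in for whatever tick source the target provides):

         #include <stdint.h>

         /* Hypothetical millisecond tick source; wraps at 2^32 ms (~49.7 days). */
         extern uint32_t GetTickMs(void);

         /* Elapsed milliseconds between two ticks. Unsigned subtraction handles
            a single wraparound correctly: (now - start) mod 2^32 is still the
            true delta even if the counter rolled over in between. */
         uint32_t ElapsedMs(uint32_t start, uint32_t now)
         {
             return now - start;
         }

     What this cannot give you is an absolute date: without recording the wall-clock time at one known tick value, the counter has no fixed zero point, which is exactly why such a value only tells you the uptime.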
  7. They are part of the LabVIEW code base.
  8. Aside from the fact that it looks awful unless you have at least a 60-inch screen, you never let the loop terminate programmatically. So to terminate your program you have to abort it. That NEVER lets the Close VISA function execute, and your port stays open until you shut down LabVIEW.
  9. No, that is definitely some DCOM issue. It very much depends on how you set up the DCOM authorization. It's a pain to get it to work at all, so I actually always avoid this. I usually create remote apps that get installed under a specific account as a service and then connect to them over TCP. Initially more work, but eventually easier to manage, as you don't have to fight DCOM (and, with some customers, their entire IT department).
  10. Hmm, you really just double-clicked them??? With VIs from an unknown source this is not very smart. You should always create an empty new VI, place the unknown VI on its diagram, and then open it from there. These VIs are set to auto-start when loaded as top-level VI, and tinkered with so you can't set them to edit mode. If they contained bad code in the diagram, you would be hosed now!
  11. You are logged in on that remote computer with an unlocked user session? And connecting to the LabVIEW instance over DCOM under this account? Maybe DCOM executes the LabVIEW process under a different account that has no desktop session available!
  12. That's a dead end, unless you have VERY good disassembly knowledge and endless time to tinker. This DLL control is a real DLL: a C++ compiled plugin module that extends the internal object hierarchy by one additional class, each class being a specific LabVIEW control. The virtual table interface of such classes is VERY cumbersome to reverse engineer. Not impossible, but REALLY VERY cumbersome. It's much easier to get an elephant through the eye of a needle than to do that! But it doesn't end there. You also have to reverse engineer the interface this class uses to interact with the LabVIEW internal object model. This object model is loosely coupled to the object model exposed by the LabVIEW VI Server interface, but really only loosely. VI Server exposes some of its methods and properties, but by far not all, and is in fact a wrapper around the internal object model that translates between the C++ interface and the stricter LabVIEW VI Server interface. And my investigations into this make me conclude that it is not even fully extensible. The idea probably was to have a fully extensible interface, but the controls that make use of it seem to rely on specific functions inside the LabVIEW kernel, so unless one only wanted to create slightly different variants of existing controls by subclassing them, it's probably not even going to work.

     This seems a very complicated business, as there used to be another, similar interface way back in LabVIEW 3. At that time the LabVIEW code was fully standard C only, but LabVIEW nevertheless used an internal object-oriented UI control hierarchy with a messaging system and assembly-written dynamic dispatch. A variant of a CIN code object file was used to add a new control into this control object hierarchy, and the Picture Control was an add-on package that seemed to use this interface. In LabVIEW 4, however, the Picture Control C code was fully integrated into LabVIEW itself, and this control interface was left in limbo, only to be almost completely removed in LabVIEW 5, when a more extensive new object hierarchy was developed that went beyond just the UI elements and eventually used true C++ dynamic dispatch. The problem with this solution was that while LabVIEW evolved and had to add new methods to its internal objects to implement new features such as Undo, an external code object created in an earlier version of LabVIEW and embedded in a user's VI would still implement the older virtual dispatch table that did not have these methods, so upgrading such LabVIEW code to a new version would have been rather problematic. Making this interface fully forward and backward compatible is an almost impossible task, but disallowing extension of the method table is not an option either.
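     To make the dispatch-table versioning problem concrete, here is a minimal C sketch using an explicit function-pointer table, much like the pre-C++ dispatch described above (all names are made up for illustration):

         /* Dispatch table as an old (v1) plugin control compiled it. */
         typedef struct {
             void (*Draw)(void *self);
             void (*Resize)(void *self, int w, int h);
         } ControlTableV1;

         /* The host later appends a slot for a new feature such as Undo. */
         typedef struct {
             void (*Draw)(void *self);
             void (*Resize)(void *self, int w, int h);
             void (*Undo)(void *self);   /* new slot, unknown to v1 plugins */
         } ControlTableV2;

         /* A host built against V2 that receives a table from a v1 plugin
            and calls the Undo slot reads past the end of the v1 table:
            undefined behavior, typically a crash. To stay safe, the host
            would have to version-check every table before touching any
            newer slot, for every method added in every release; that is
            the compatibility burden described above. */
         void HostInvokeUndo(ControlTableV2 *vt, void *ctrl)
         {
             vt->Undo(ctrl);   /* only valid if the plugin really provided V2 */
         }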
  13. If you buy LabVIEW now, you will get LabVIEW 2014, and LabVIEW 2014 can't directly read LabVIEW 4 VIs. Almost 20 years is a long period, and they had to make a cut at some point. Here you can see the compatibility table between LabVIEW versions. You will need an intermediate conversion package, which you should be able to discuss with your NI sales representative. It's basically an evaluation version of LabVIEW 8.2 or 8.5 that can load and recompile older VIs into a format that the most recent LabVIEW version can read.
  14. You seem to be using the VERY SUPER old original Database Toolkit with LabVIEW 4!!! You do realize that this software was released about 15 years before anyone even knew that Windows 7 might appear anytime soon? Are you trying to load LabVIEW 4 on your Windows 7 machine? LabVIEW 4 was originally a Windows 3.1 application that used a special memory manager to support 32-bit operation. There was only a preliminary version of it compiled for and working on Windows NT. While LabVIEW for Windows 3.1 could run in compatibility mode on Windows NT and even 2000, this compatibility mode was just a kludge and would fail badly as soon as you started to access hardware interfaces. That is not supported and will almost certainly fail in many different ways! I'm surprised that it even worked on XP, even without hardware access, but it's a safe bet that Windows 7 has broken something that would be necessary to run the Windows 3.1 version of LabVIEW on it.

     The way to interface to external code in LabVIEW 4 was through so-called CINs. LabVIEW has changed a lot since those days: it subsequently removed the ability to create CINs and eventually scrapped support for loading CINs on all new platforms, including every 64-bit version of LabVIEW. Or are you using Windows 7 64-bit with a recent version of LabVIEW 64-bit installed? As explained, that version doesn't support CINs. LabVIEW CINs are compiled binary resources, similar to DLLs; they have to exactly match the memory model of the LabVIEW application you use. So there needed to be separate CINs for LabVIEW for Windows 3.1, LabVIEW for Windows 32-bit, and about three versions of LabVIEW for Mac (Mac OS 9, Mac OS X PPC, Mac OS X x86), but there are none for LabVIEW for Windows 64-bit nor for LabVIEW for Mac OS X 64-bit, and there never will be.

     You may find the price of the toolkits expensive, but unless you want to keep working with Windows XP or 2000 you really won't get around upgrading to newer versions. Besides, working in LabVIEW 4 is a major pain in the a** in comparison with newer versions of LabVIEW. If you decide to upgrade at least LabVIEW, you can also look at alternative Database Toolkits on the net. There is one called LabSQL and another one called ADO Toolkit. There are others too, but no matter which you choose, none of them will support LabVIEW 4.0, so an upgrade of at least LabVIEW is unavoidable. Just FYI, we are currently at LabVIEW 2014, which is about equivalent to LabVIEW version 14.0 in the old numbering scheme.
  15. That sounds like a faulty memory! There is a threadconfig.vi in vi.lib\Utility\sysinfo.llb that will configure LabVIEW and write the necessary settings into the LabVIEW.ini file. You can then copy them into your application's ini file. That should be managed by Windows.
  16. The first approach would certainly be to put your .Net code into a different execution system. That should give your timer code (it is all LabVIEW, isn't it?) enough room to do its work. While LabVIEW executes a .Net call, the calling thread is basically locked. If you happen to have 8 parallel .Net calls, your execution system is blocked. Increasing the number of threads per execution system is only a band-aid at best: sooner or later you will manage to use them all up and run into problems again. But do you really have that many parallel code paths calling into .Net? And do you only have .Net, no ActiveX or Call Library Node calls?
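     The starvation mechanism itself is generic; here is a minimal C sketch with POSIX threads, where a fixed pool of worker threads stands in for one execution system (an analogy only, not LabVIEW's actual implementation):

         #include <pthread.h>
         #include <stdio.h>
         #include <unistd.h>

         #define POOL_SIZE 4   /* stand-in for the threads of one execution system */

         /* A "blocking external call": the worker thread is unavailable for
            its entire duration, just like a thread parked inside a .Net call. */
         static void *BlockingCall(void *arg)
         {
             (void)arg;
             sleep(60);
             return NULL;
         }

         int main(void)
         {
             pthread_t pool[POOL_SIZE];

             /* Occupy every thread of the pool with a blocking call... */
             for (int i = 0; i < POOL_SIZE; i++)
                 pthread_create(&pool[i], NULL, BlockingCall, NULL);

             /* ...and any further work scheduled on this pool, e.g. a timer
                that should fire every 100 ms, cannot run until a call returns. */
             puts("pool exhausted: further work on this pool is starved");

             for (int i = 0; i < POOL_SIZE; i++)
                 pthread_join(pool[i], NULL);
             return 0;
         }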
  17. And they may not just change for the better but introduce bugs in newer versions, since they are not covered by the normal daily build tests, and the internal tool that used them may have been removed or changed to use other functions. They can also suddenly become unimplemented: as far as NI is concerned they were never documented, so removing them is not something they would avoid at great cost when they clean up unused code. As AQ and others have said on various occasions, a large internal project is currently underway to rewrite substantial parts of LabVIEW to get rid of old legacy code that no longer matches the current coding standards, and in the process it is likely that a lot of old undocumented features will get axed if they are found to be either incompatible with the new architecture or unused by the current internal LabVIEW tools.
  18. Yes it is, but as pointed out, the protocol to download a hex file to the AVR controller seems to be quite involved. If it is fully documented by Atmel you can of course try to implement it in LabVIEW, but I would expect this to be non-trivial.
  19. It's debatable whether this should work. But the quickest solution would be to allow for a small change in the "Write Key (Variant).vi" and "Read Key (Variant).vi" in the Cluster case, similar to this:
  20. Unless you use a truly specced high-speed flash drive you are likely to keep seeing poor performance. Especially those cheapo giveaway flash drives (I'm looking at you, NI) have abominable performance, particularly for writing. I've found that almost all flash drives handed out for marketing purposes seem to be of that super el-cheapo quality, and similarly most no-name flash drives that you find in supermarkets. They cost maybe half as much as a high-quality brand drive but perform far worse than half as well.
  21. You might be right there. I guess I was assuming that there must be a difference between an initialized object and one that hasn't been initialized yet, since there is this Get LV Class Default Value.vi. But I guess any class, as soon as it gets placed on a diagram, has a cluster, and To More Specific Class simply verifies that the underlying object has a matching method AND cluster definition and in that case copies the class data into the existing cluster. Makes sense from a by-value architectural point of view. As such there is indeed no Not an Object but only a Default Data state, which is the equivalent of the class data cluster having all default-value elements.
  22. I didn't mean to imply that an object reference is a by-ref implementation. But I have a hunch that an object wire on the LabVIEW diagram is similar to other refnums. The use of a refnum doesn't have to mean that the object is by-ref; it's only a matter of how LabVIEW implements the rules to access the underlying object that defines a by-ref or by-value implementation. As such, implementing a "Not an Object" primitive would not be that difficult. But maybe they haven't done that so far because they might want to solve a few more issues with Not a Number/Path/Refnum first, to allow some distinction between the canonical invalid underlying object type for refnums and objects and the once-allocated but later destroyed object.
  23. It sure is known when the function returns. So you would need to determine its length after the function has returned the array of strings. There is a C runtime function strlen() which does exactly that, and a LabVIEW manager function StrLen() which does the same. So calling StrLen() with the CLN, just as you did with MoveBlock(), on the string pointer will return the number of characters in the string; then you can do an Initialize Array with that length, a MoveBlock() to copy the string from the string pointer into the LabVIEW byte array, and finally convert it with Bytes to String into a string. There is another LabVIEW manager function, LStrPrintf(), which combines those steps into one convenient function call. The definition is: MgErr LStrPrintf(LStrHandle handle, CStr format, ...); You would configure the first parameter of the CLN as a LabVIEW string handle passed by value, the second as a C string pointer, and the third as a pointer-sized integer. To the first you wire an empty LabVIEW string constant, to the second a string constant containing "%s" (without quotes), and the third is your C string pointer from your array. The output of the first parameter then contains the properly sized string. The return value of the function is an int32 and, if not equal to zero, indicates a LabVIEW error code.
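     Expressed as C against the LabVIEW manager API (extcode.h) instead of as CLN configuration, the two routes look roughly like this (a sketch, assuming src is a valid NUL-terminated C string returned by the DLL):

         #include "extcode.h"   /* LabVIEW manager API: StrLen, MoveBlock, LStrPrintf */

         /* Route 1: measure, resize, copy; the StrLen + MoveBlock approach. */
         static MgErr CStrToLStr_Manual(const char *src, LStrHandle h)
         {
             int32 len = StrLen((CStr)src);                 /* length without NUL */
             MgErr err = NumericArrayResize(uB, 1, (UHandle *)&h, len);
             if (err)
                 return err;
             MoveBlock((UPtr)src, (UPtr)LStrBuf(*h), len);  /* raw byte copy */
             LStrLen(*h) = len;                             /* set string length */
             return mgNoErr;
         }

         /* Route 2: LStrPrintf measures, resizes, and copies in one call. */
         static MgErr CStrToLStr_Printf(const char *src, LStrHandle h)
         {
             return LStrPrintf(h, (CStr)"%s", src);
         }

     In an actual CLN you would call these manager functions from the LabVIEW runtime itself (selecting "LabVIEW" as the library name), which is why no extra DLL is needed.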
  24. This is one topic, but there are more, and I've seen some weird errors too, especially with the conditional disable structure in the General Error Handler.
  25. That is only half the truth. For an uninitialized object reference you are right, but for an object reference that has been created and then destroyed, the actual refnum value is not null yet no longer valid! There should be something akin to "Not a Number/Path/Refnum" for object wires. In fact I was at first assuming that LabVIEW objects are inherently also refnums, but that seems not really to be the case. And extending "Not a Number/Path/Refnum" to "Not a Number/Path/Refnum/Object" would seem logical, but the resulting long name sounds like a bad idea.