Leaderboard

Popular Content

Showing content with the highest reputation since 04/19/2022 in Posts

  1. LabVIEW has never been a money maker for NI directly. They were able to develop LabVIEW because of what they earned with their hardware sales, and LabVIEW was used to drive those hardware sales. A very successful model that drove others like HP Vee out of the market before very long. Maybe HP/Agilent was also simply already too big for the market segment that they could possibly target with a product like this. The entire T&M component market isn't that huge. For HP it was what they had started with, but the big money was earned (and sometimes big money lost) in other areas. T&M was good for steady revenue, but nothing that would stand out on the yearly shareholder report. It was unsexy for non-technicals and rather boring. That was one of the big reasons to separate HP into multiple companies: an attempt to create smaller entities that each target a specific market segment and could fine-tune their sales and marketing efforts to the needs of that market.
About 10 years ago NI reached the size where they started to feel the limitations of the T&M component market themselves. There simply was not a big enough market left to capture to continue their standard double-digit yearly sales growth for much longer. Some analysts were hired to look into this and their recommendations were pretty clear: don't try to be the wholesale everything-for-everyone parts manufacturer in T&M, but concentrate on specific areas where big corporations with huge production lines invest their test and measurement money. Their choice fell on semiconductor testing and, more recently, the EV market. It has huge potential, and rather than selling tens of thousands of DAQ boxes to hundreds of integrators, they now sell and deliver hundreds of fully assembled turnkey testers to those corporate users and earn more with each of them than they could ever earn with several thousand DAQ boxes. What used to be NI's core business is nowadays a sideline, at best a means to deliver some parts for those testers, but more and more a burden that costs a lot of money in comparison to the revenue it could generate even under ideal conditions.
If you understand this, you can also guess where NI is heading. They won't die and their shares will likely not falter, but what they will be has little to do with what they used to be. Whether LabVIEW still has a place in this I do not know. Personally I think it would be better if it were under the umbrella of a completely separate entity from the new NI, but I also have my doubts that that would have long-term survival chances. Earning enough money with a development environment itself is a feat that almost nobody has successfully managed for a longer period. But the sometimes-heard request to open-source LabVIEW doesn't have many chances either. It would likely cause a few people to take a peek at it and then quickly lose interest, since its code base is simply too complex. And there is also the problem that the current LabVIEW source code could never be open-sourced as is. There are so many patent-encumbered parts and third-party license dependencies in it that nobody would be legally allowed to distribute even a single build of it without first hiring an entire law firm to settle those issues. While NI owns the rights to them or acquired a license to use them, many of these licenses do not give NI the right to simply let others use them as they wish. So open-sourcing LabVIEW would be a fairly big investment in time and effort before it could even be done.
And who is willing to foot that bill?
    3 points
  2. I don't know much about CINs, CLNs, or DLL calling in LabVIEW. But don't be discouraged. Our history makes us what we are. There are several projects I started and spent a lot of time on just to do nothing with, but I feel like I learned stuff, or grew as a developer and a person, just because of the journey. Maybe that time could have been better spent elsewhere, but with that mentality I'd never do anything for entertainment.
    3 points
  3. Two suggestions: 1) Consider using JSON as your config-data format, rather than clusters. Using JSONtext to manipulate JSON will be faster than using the OpenG tools to manipulate clusters. 2) Programmatically get an array of references to all your config-window controls and register a single Event case for the Value Change of any one of them. Then use their (hidden) labels to encode which config item they set. For example, your control with the caption "Baud Rate" could have the hidden label "$.Serial Settings.Baud Rate", which is the JSONpath to set in your config JSON (or config clusters).
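For illustration, a hypothetical config document that such a JSONpath would address (all names assumed; "$.Serial Settings.Baud Rate" selects the nested value below):

    {
        "Serial Settings": {
            "Baud Rate": 9600,
            "Parity": "None"
        },
        "Logging": {
            "Enabled": true
        }
    }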
    2 points
  4. Pointers are pointers. Whether you use DSNewPtr() and DSDisposePtr() or malloc() and free() doesn't matter too much, as long as you stay consistent. A pointer allocated with malloc() has to be deallocated with free(), a DSNewPtr() must be deallocated with DSDisposePtr(), and a pointer allocated with HeapAlloc() must be deallocated with HeapFree(), etc. They may in the end all come from the same heap (likely the Windows heap), but you do not know that, and even if they do, the pointer itself may be, and often is, different, since each memory manager layer adds a little of its own bookkeeping to manage its pointers better. To make matters worse, if you resolve to use malloc() and free(), you always have to do the corresponding operations in the same compilation unit: your DLL may be linked with gcc's C library 6.4 and the calling application with the MS C Runtime 14.0, and while both have a malloc() and free() function, they absolutely and certainly will not operate on the same heap.
Pointers are non-relocatable as far as LabVIEW is concerned, and LabVIEW only uses them for clusters and internal data structures. All variable-sized data on the diagram, such as arrays and strings, is ALWAYS allocated as a handle. A handle is a pointer to a pointer, and the first N int32 elements in the data buffer are the dimension sizes (N being the number of dimensions), followed directly by the data, memory-aligned if necessary. Handles can be resized with DSSetHandleSize() or NumericArrayResize(), but the size of the handle does not have to match the size elements in the array that indicate how many elements the array holds. Obviously the handle must always be big enough to hold all the data, but if you change the size element of an array to indicate that it holds fewer elements than before, you do not necessarily have to resize the handle to that smaller size. If the change is big you absolutely should do so anyway, but if you reduce the array by a few elements you can forgo the resize call.
There is NO way to return pointers from your DLL and have LabVIEW use them as arrays or strings, NONE whatsoever! If you want to return such data to LabVIEW it has to be in a handle, and that handle has to be allocated, resized, and deallocated with the LabVIEW memory manager functions. No exception, no passing Go and collecting your starting salary, nada, niente, nothing! If you do it this way, LabVIEW can directly use that handle as an array or string, but of course what you do in C in terms of the data type in it, and the corresponding size element(s) in front of it, must match exactly. LabVIEW absolutely trusts that a handle is constructed the way it wants it, and makes painstakingly sure to always do it like that itself, so you had better do so too.
One speciality in that respect: LabVIEW explicitly allows for a NULL handle. This is equivalent to an "empty" handle with the size elements set to 0, and exists for performance reasons: there is little sense in invoking the memory manager and allocating a handle just to store in it that there is no data to access. So if you pass handle data types from your diagram to your C function, your C function should be prepared to deal with an incoming NULL handle. If you just blindly call DSSetHandleSize() on that handle it can crash, as LabVIEW may have passed in a NULL handle rather than a valid empty handle.
Personally I prefer to use NumericArrayResize() at all times, as it deals with this speciality properly and also accounts for the actual bytes needed to store the size elements, as well as any platform-specific alignment. A 1D array of 10 double values requires 84 bytes on Win32, but 88 bytes on Win64, since under Win64 the array data elements are aligned to their natural size of 8 bytes. When you use DSSetHandleSize() or DSNewHandle() you have to account for the int32 size element and the possible alignment yourself. If you use

    err = NumericArrayResize(fD, 1, (UHandle*)&handle, 10)

you simply specify in its first parameter that it is an fD (floatDouble) data type array, that there is 1 dimension, the handle passed by reference, and the number of array elements it should have. If the array was a NULL handle, the function allocates a new handle of the necessary size. If the handle was a valid handle instead, it is resized to be big enough to hold the necessary data. You still have to fill in the actual size of the array after you have copied the actual data into it, but at least the complication of calculating how big that handle should be is taken out of your hands.
Of course you can also always go the traditional C way: the caller MUST allocate a memory buffer big enough for the callee to work with and pass its pointer down to the callee, which then writes something into it, and after return the data is in that buffer. The way that works in LabVIEW is that you MUST make sure to allocate the array or string prior to calling the function. InitializeArray is a good function for that, but you can also use the Minimum Size configuration in the Call Library Node for array and string parameters. LabVIEW allocates a handle, but when you configure the parameter in the Call Library Node as a data pointer, LabVIEW passes the pointer portion of that handle to the DLL. For the duration of that function, LabVIEW guarantees that this pointer stays put in memory and won't be reused anywhere else, moved, deallocated, or anything like that (unless you checked the constant checkbox in the Call Library Node for that parameter, in which case LabVIEW will use that as a hint that it can pass the handle in parallel to other functions that are also marked as not going to modify it). It has no way to prevent you from writing into that pointer anyway in your C function, but that is a clear violation of the contract you yourself set up when configuring the Call Library Node and telling LabVIEW that this parameter is constant. Once the function returns control to the LabVIEW diagram, that handle can get reused, resized, or deallocated at absolutely any time, and you should therefore NEVER EVER hold onto such a pointer beyond the moment you return control back to LabVIEW!
That's pretty much it. Simple as that, but most people fail at it repeatedly anyhow.
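As a minimal sketch of the handle-based approach described above (assuming the standard extcode.h declarations from LabVIEW's cintools directory; FillDoubles is a made-up example function), returning a 1D array of doubles through a handle could look like this:

    #include "extcode.h"   /* LabVIEW memory manager declarations */

    /* 1D double array exactly as LabVIEW lays it out:
       one int32 dimension size, then the (aligned) data */
    typedef struct {
        int32   dimSize;
        float64 elt[1];
    } DblArrayRec, **DblArrayHdl;

    /* Configure the CLFN parameter as "Adapt to Type" (handle by reference).
       The incoming handle may be NULL; NumericArrayResize() copes with that. */
    MgErr FillDoubles(DblArrayHdl *arr, int32 n)
    {
        MgErr err = NumericArrayResize(fD, 1, (UHandle*)arr, n);
        if (!err)
        {
            int32 i;
            for (i = 0; i < n; i++)
                (**arr)->elt[i] = (float64)i;
            (**arr)->dimSize = n;   /* set the logical size last */
        }
        return err;
    }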
    2 points
  5. Relevant slide from my Don't Wait for LabVIEW R&D... Implement Your Own LabVIEW Features! presentation:
    2 points
  6. You need to understand what managed code means. In .Net that is a very clear and well-defined term with huge implications. LabVIEW is a fully managed environment too, and all the same basic rules apply. C on the other hand is completely unmanaged: who owns a pointer, who is allowed to do anything with it, even reading from it, and when, is completely up to contracts that each API designer defines himself. And if you as caller don't adhere to that contract to the letter, no matter how brain-damaged or undocumented it is, you are in VERY DEEEEEEEP trouble.
LabVIEW (and .Net and (D)COM and others like them) all have a very well-defined management contract. Well-defined doesn't necessarily mean that it is simple to understand, or that there is lengthy documentation detailing everything about it. Not even .Net has exhaustive documentation. Much of it is based on some basic rules and a set of APIs to use, which together guarantee that the management of memory objects is fully consistent and protected throughout the lifetime of each of those objects. Mixing and matching those ideas between environments is a guaranteed recipe for disaster, and so is not understanding them as you pass around data. For other platforms such as Linux and macOS there also exist certain management rules, typically specific to the API or group of APIs used. For instance it makes a huge difference whether you use the old (and mostly deprecated) Carbon APIs or the modern Cocoa APIs. They share some common concepts, and some of their data types are even transferable between the two without invoking costly environmental conversions, but that is where the common base stops. Linux, in keeping with its heritage, is a collection of very different ideas and concepts, and each API tends to follow its own specific rules. Much of it is very logical once you understand the principles of safe and managed memory. Until then it all looks like incomprehensible magic, and you are much better off staying away from trying to optimize memory copies and such things to squeeze out a little more performance.
One of the strengths of LabVIEW is that it is very difficult to write code that crashes your program. That is, until you venture into accessing external code. Once you do that, your program is VERY likely to crash, randomly or not so randomly, unless you fully understand all the implications and intricacies of working that way. The pointer from a LabVIEW array or string, passed to the Call Library Node, is only guaranteed to exist for the time your function runs. Once your function returns control back to LabVIEW, it reserves the right to reallocate, resize, delete, or reuse that memory buffer for anything it deems necessary. This part is VERY important to allow LabVIEW to optimize memory copies of large buffers. If you want a buffer that you control yourself, you have to allocate it yourself explicitly and pass its reference around to wherever it is needed. But do not expect LabVIEW to deallocate it for you: as far as LabVIEW is concerned it does not know that that variable is a memory buffer, nor when it is no longer needed, nor which heap management routines it should use to properly deallocate it. And don't expect it to be able to directly dereference the data in that buffer to display it in a graph, for instance.
As far as LabVIEW is concerned, that buffer is simply a scalar integer, a magic number that could mean how many kilometers away the moon is, or how many seconds exist in the universe's life, or how many atoms fit in a cup of tea, or anything else you fancy. Alternatively, you pass the native LabVIEW handle into the Call Library Node and use the LabVIEW memory manager functions if you have to resize or deallocate it. That way you can use LabVIEW buffers and adhere to the LabVIEW management contract. But it means that that part of your external code can only run when called from LabVIEW: other environments do not know about these memory management functions and consequently cannot provide compatible memory buffers to pass into your functions. And definitely don't ever store such handles somewhere in your external code to access them asynchronously from elsewhere once your function has returned control to LabVIEW. That handle is only guaranteed to exist for the duration of your function call, as mentioned above. LabVIEW remains in control of it and will do with it whatever it pleases once you return control from your function call to the LabVIEW diagram. It could reuse it for something entirely different, and your asynchronous access will destroy its contents, or it could simply deallocate it, and your asynchronous access will reach into nirvana and send your LabVIEW process into "An Access Violation has occurred in your program. Save any data you may need and restart the program! Do it now, don't wait and don't linger, your computer may blow up otherwise!" 😀
And yes, one more piece of advice: once you start to deal with external code anywhere and in any way, don't come here or to the NI forum and ask why your program crashes or starts to behave very strangely and whether there is a known LabVIEW bug causing this. Chances are about 99.25678% that the reason for that behaviour is your external code or the interface you created for it with Call Library Nodes. If your external code tries to be fancy and deals with memory buffers, that chance increases by several orders of magnitude! So be warned!
In that case you are doing something fundamentally wrong. Python is notoriously slow, due to its interpreted nature and the concept of everything being an object. There are no native arrays, as an array is represented as a list of objects. To get around that, numpy uses wrapper objects around externally managed memory buffers that allow consecutive representation of an array in one single memory object and fast indexing into it. That allows numpy routines to be relatively fast when operating on arrays; without it, any array-like manipulation tends to be dog slow. LabVIEW is fully compiled and uses many optimizations that let it beat Python performance with its hands tied behind its back. If your code runs so much slower in LabVIEW, you have obviously done something wrong, and not just tied its hands behind its back but gagged and hogtied it too. Things that can cause this are, for instance, Build Array nodes inside large loops, if we talk about LabVIEW diagram code, and bad external code management if you pass large arrays between LabVIEW and your external code. The experiments you show in your post may be interesting exercises, but they definitely go astray in trying to solve such issues.
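A sketch of that lifetime rule in C (hypothetical function names; the point is the pattern, not any specific API):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* WRONG: caching the LabVIEW-owned pointer; it is only guaranteed
       valid while this call is running. */
    static double *g_cached = NULL;

    void StoreForLater(double *data, int32_t n)
    {
        (void)n;
        g_cached = data;   /* dangling as soon as we return to LabVIEW */
    }

    /* If the data must outlive the call, copy it into memory you own
       (and free it yourself later; LabVIEW won't). */
    static double *g_copy = NULL;
    static int32_t g_len  = 0;

    int StoreCopy(const double *data, int32_t n)
    {
        double *p = realloc(g_copy, (size_t)n * sizeof *p);
        if (!p) return -1;                       /* allocation failed */
        memcpy(p, data, (size_t)n * sizeof *p);
        g_copy = p;
        g_len  = n;
        return 0;
    }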
    2 points
  7. Be afraid; be very afraid. Generally, there is no concept of a pointer in LabVIEW. LabVIEW is a managed environment, so it is more like .NET. You don't know where a value is stored, or even how much memory is used to store it. The CLFN will do that out of the box. Yes, because you don't know where it is for the lifetime of the variable.
    2 points
  8. I'd just pop up a dialog with a "Please contact our company's support channel and Idea Exchange forum to discuss or request implementation of this feature. Reminder: access to future improvements of our software is reserved to continuing subscribers. Other cheaper and more powerful alternatives may be available".
    2 points
  9. If you have an MNU inside a class (or library, really) you can set it as the Default Menu for items in that class or library. Then if you right-click on members of the class, or on the class constant, the menu you specified will be available as a shortcut. Let's say I install a File IO package with a palette for it, and then I install a Zip package. I might want that Zip package to insert items into the File IO palette, or I might want to make a subpalette for the Zip, or make a subpalette under Advanced for the Zip which is also in the File IO palette. In any of these cases you can then add the MNU to the Zip class and set it as the default. Now I can have the File IO palette come up if I right-click a Zip class, or I can have it come up with the subpalette if that's what I did during the install. Here is some information on it:
- https://zone.ni.com/reference/en-XX/help/371361R-01/lvhowto/setting_default_palette_menus/
- https://forums.ni.com/t5/LabVIEW/Here-s-a-scripting-VI-for-LVClass-default-menu/td-p/2429356
In the scenario I described you'd want to edit the Zip class, but you also need to edit the File IO palettes, either adding Zip items, or adding subpalettes to it, or linking the Zip palettes to it somewhere. Doing this causes browse dialogs if, say, a required Zip DLL was missing and needed to be installed as part of the system. In that case I'd just include the DLL, but my actual dependencies include XNet, which can be a 1 GB install, so including that in the package isn't ideal.
    2 points
  10. Basically the point I was also trying to make in previous topics about CINs. 😀
    2 points
  11. The SQLite API for LabVIEW had a feature for that. Very easy with a database. I suppose you could do something similar just by saving and loading a particularly named JSON file.
    1 point
  12. I don't understand the connection. They were running it on a low-power laptop. They were a student. They were (and continue to be) concerned with the climate. They considered (and continue to consider) themselves poor. Not that it matters.
    1 point
  13. Generally, if you use an external library to do something because that library does things that LabVIEW can't: go for it! If you build an external library to operate on multidimensional arrays and do things on them that LabVIEW has native functions for: you are totally and truly wasting your time. Your compiled C code may in some corner cases be a little faster, especially if you really know what you are doing at the C level, and I do mean REALLY knowing, not just hacking around until something works. So sit back, relax, and think about where you need to pass data to your external Haibal library to do actual stuff, and where you are simply wasting your time with premature optimization.
So far your experiments look fine as a pure educational exercise in themselves, but they serve very little purpose in trying to optimize something like an interface to a massive numerical library like Haibal is supposed to become. What you need to do is design the interfaces between your library and LabVIEW in a way that passes data around efficiently. And that works best by following as few rules as possible, but all of them VERY strictly. You cannot change how LabVIEW memory management works, and neither can you likely change how your external code wants its data buffers allocated and managed. There is almost always some impedance mismatch between the two for anything but the most simple libraries. The LabVIEW Call Library Node allows you to support some common C scenarios in the form of data pointers. In addition it allows you to pass its native data to your C code, which every standard library out there simply has no idea what to do with. Here comes your wrapper shared library interface: it needs to manage this impedance mismatch in a way that is both logical throughout and still performant (a sketch of such a wrapper follows below).
Allocating pointers in your C code to pass back and forth across LabVIEW is a possibility, but you want to avoid that as much as possible. Such a pointer is an anachronism in terms of LabVIEW diagram code. It exposes internals of your library to the LabVIEW diagram, and in that way makes access possible to users of your library, 99% of whom have no business doing that, nor are they able to understand what they are doing. And no, saying "don't do that" usually only helps for those who are professionals in software development. All the others very quickly believe they know better, and then the reports about your software misbehaving and being a piece of junk start pouring in.
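As a hedged sketch of such a wrapper, assuming the extcode.h declarations and a made-up lib_process() standing in for a library routine:

    #include "extcode.h"

    /* hypothetical plain-C library call that the wrapper bridges to */
    extern int lib_process(const double *in, double *out, int n);

    typedef struct { int32 dimSize; float64 elt[1]; } DblArr, **DblArrHdl;

    /* Takes a native LabVIEW handle in, resizes the output handle with
       the LabVIEW memory manager, and hands plain pointers to the library. */
    MgErr LV_Process(DblArrHdl in, DblArrHdl *out)
    {
        /* an incoming LabVIEW handle may legitimately be NULL (= empty) */
        int32 n = (in && *in) ? (*in)->dimSize : 0;
        MgErr err = NumericArrayResize(fD, 1, (UHandle*)out, n);
        if (err) return err;
        if (n && lib_process((*in)->elt, (**out)->elt, n))
            return mgArgErr;
        (**out)->dimSize = n;
        return mgNoErr;
    }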
    1 point
  14. I was wondering about that too. But the scrollbars in the image he posted seem to indicate that the VI is actually properly inserted, and the Insert VI method doesn't seem to return an error either. With the limited information that he tends to give and the limited LabVIEW knowledge he seems to have, it is all very difficult to debug remotely, though. And it is not really my job to do so.
Edit: I'll be damned! A VI inserted into a Subpanel does not have a window handle at all. I thought I had tested that, but apparently I got misled in some way. LabVIEW seems to handle that all internally without using any Windows support for it. So back to the drawing board, to make that not a Subpanel window but instead use real Windows child window functionality. I don't like to use the main VI's front panel as the drawing canvas, as the library would draw all over the front panel, fighting LabVIEW's control and indicator redraws.
As to the NET_DVR_GetErrorMessage() call, I overlooked that one. Good catch, and totally unexpected! It seems that the GetLastError() call is redundant when calling this function, as GetErrorMessage() is not just a function to translate an error code but really a full replacement for GetLastError(). Highly unusual to say the least, but that's what you get for not reading the documentation to the last letter. 😆
It's hard to debug such software without having any hardware to test with, so the whole library that I posted is in fact a dry exercise that has never run in any way, as there is nothing it can really run with on my system. The same goes for the callback code: I tested that it compiles (with my old but trusted VS2005 installation) but I cannot test that it runs properly. Well, I could, but that would require writing even more C code to create a test harness that simulates the Hikvision SDK functionality. I like to tinker with this kind of problem, but everything has its limits when it is just a hack job in my free time. 😀
Attached is a revisited version of the library with the error VI fixed. It does not use a Subpanel for now but simply lets Empty.vi stand on its own for the moment. Quick and dirty, but we can worry about getting that properly embedded in the main VI after it has proven to work like this. HKNetSDK Interface.zip
    1 point
  15. There is also a presentation on YouTube, which I found helpful: https://www.youtube.com/watch?v=xXGro_DylHs&ab_channel=LabVIEWArchitectsForum FWIW, a few lessons learned:
- The project provider VIs have to be installed in the LabVIEW resource folder. Distributing this through VIPM makes it super easy to specify this install location, and also to turn on "require LabVIEW to restart" after install.
- You have to restart LabVIEW to test code changes.
- The code executes in a separate LabVIEW context. This can cause some weird behavior; I found that not all code runs in this context safely.
    1 point
  16. Looks like The Pirate Bay is going to become NI support.
    1 point
  17. I think it's called a project provider; there is a special interest group on the NI Forums. I've never fiddled with this, but some have. Here are a few other links:
- https://forums.ni.com/t5/Developer-Center-Resources/Customize-the-LabVIEW-Project-Explorer-Using-the-Project/ta-p/3532774?profile.language=fr
- https://forums.ni.com/t5/LabVIEW-Project-Providers/Project-Providers-Documentation/m-p/3492573#M285
    1 point
  18. 1) You don't, since it is not code. It is the function prototype (actually the function pointer declaration) of a function that YOU have to implement. You then pass the name of that function as a parameter to the other function that wants this callback function. Whenever that other function thinks it wants to tell YOU something, it calls that callback function with the documented parameters, and YOUR callback function implementation then does something with that data. But your function is called in the context of the other function, at SOME time after you called the original function that you passed your callback function to. Are you still with me? If not, don't worry, most people have big trouble with that. If yes, then you might have a chance to actually get this solved eventually. But don't expect to have this working tomorrow or next week; you have a steep learning curve in front of you.
2) The iCube Camera In is simply a LabVIEW class that handles the whole camera management in LabVIEW, and in some of its internal methods accesses the DLL interface, creates the event message queue, starts up an event handler, and ..., and ..., and ....
3) The RegEventCallback function is a LabVIEW node that you can use to register events on CERTAIN LabVIEW refnums. One kind are .Net refnums, IF the object class behind that refnum implements events. .Net events are the .Net way of doing callbacks. They are similarly complex to understand and implement, but avoid a few of the nastier problems of C callback pointers, such as data type safety. To use that node you will need a .Net assembly that exposes some object class which supports events of some sort. Since .Net uses type-safe interface descriptions, LabVIEW can determine the parameters that such an event has, create a callback VI automatically, and connect it behind the scenes with the .Net event. It works fairly well but has a few drawbacks that can be painful during development. Once the callback VI has been registered and activated, it is locked inside LabVIEW, and there are only two ways to get this VI back into an editable state: restart LabVIEW, or, after the object classes on which the event occurred have been properly closed (Close Reference node), explicitly call the .Net garbage collector in LabVIEW to make .Net release the proxy caller that LabVIEW created and instantiated to translate between the .Net event and the LabVIEW callback VI. If you have a .Net assembly that exposes events for some of its object classes, it is usually quite a bit easier to interface with from LabVIEW than trying to do callback pointers in a C(++) DLL/shared library. Writing an assembly in C# that implements events is also not really rocket science, but definitely not a beginner's exercise either.
4) If you interface to C(++) in LabVIEW there is no safety net, sturdy floor, soft cushion, or trampoline to save your ass from being hurt when something doesn't 100% match between what you told LabVIEW the external code expects and what it really does expect. In the best case it's a hard crash with an error message, the next best case is a hard crash with no error message, and after that you are in the land of maybes, good luck, and sh*t storms. A memory corruption does not have to crash your process immediately; it could also simply overwrite your multimillion-dollar experiment results without you noticing until a few hours later, when the whole factory starts to blow up because it is operating on wrong data.
So be warned, tread safely, and make sure to have your C(++) solution tested by someone who really understands what the potential problems are, or don't ever use your code in a production environment. This is the biting in your ass that dadreamer talked about, and it is not really LabVIEW that does it, but you yourself!
5) Which video screen output are you talking about? Once you have managed to get the camera data into your LabVIEW program without blowing up your lab? Well, you could buy IMAQ Vision, or start another project where you will need to learn a lot of new things to do it yourself. 🙂
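To make point 1 concrete, here is a minimal, self-contained sketch of the callback pattern; every name in it is hypothetical (this is not the Hikvision or iCube API):

    #include <stdint.h>
    #include <stdio.h>

    /* The prototype the library documents: YOU implement a function
       with this exact signature. */
    typedef void (*FrameCallback)(const uint8_t *data, int32_t size,
                                  void *userData);

    /* Stub standing in for the real library, just to make this runnable:
       it stores your function pointer and "fires" it later. */
    static FrameCallback g_cb = NULL;
    static void *g_ctx = NULL;

    int RegisterFrameCallback(FrameCallback cb, void *userData)
    {
        g_cb = cb;
        g_ctx = userData;
        return 0;
    }

    /* YOUR implementation; the library calls it at SOME later time,
       possibly on its own thread. Get the data out quickly, e.g. copy
       it into a queue your application owns. */
    static void MyFrameHandler(const uint8_t *data, int32_t size,
                               void *userData)
    {
        (void)data; (void)userData;
        printf("got a frame of %d bytes\n", (int)size);
    }

    int main(void)
    {
        uint8_t frame[16] = {0};
        RegisterFrameCallback(MyFrameHandler, NULL);   /* step 1: register  */
        if (g_cb)
            g_cb(frame, (int32_t)sizeof frame, g_ctx); /* step 2: it fires  */
        return 0;
    }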
    1 point
  19. There is a reason why so many pleas for support of camera access are out there and not a single properly working solution except paid toolkits. Actually it's not one reason but a plethora of them:
- cameras use lots of different interfaces
- they often claim to follow a certain standard but usually don't do so to the letter
- there are several dozen different communication standards that a camera manufacturer can (try to) follow
- it involves moving lots of data AFAP, which requires good memory management from the end-user application down to the camera interface, through many layers of software, often from different manufacturers
- it costs time, time, and even more time to develop
- it is fairly complex, and not many people have the required knowledge to do it beyond a "look mom, it keeps running without needing its hand held (most of the time)" level
Callback functions are not really magic, but there is a reason they are only mentioned in the advanced section of all the programming textbooks I know (if they are mentioned at all). Most people tend to have a real hard time wrapping their minds around them. For many it already starts with simple memory pointers, and a callback function is simply a memory pointer on steroids. 😀 And just when you think you have mastered them, you will discover that you haven't really started, as concurrency and multithreading try to throw not only a wrench in your wheels but an entire steamroller.
    1 point
  20. I won't be there this year, and haven't heard anything about an official BBQ. Fingers crossed for a full normal NI Week next year.
    1 point
  21. I have no experience with these cameras or the Hikvision SDK, but some things on your BD caught my eye immediately:
- Looks like the HCNetSDK.dll developer made all the functions use the stdcall calling convention, whereas your CLFNs use the cdecl calling convention.
- You've set the NET_DVR_Login_V30 CLFN to accept only 4 input parameters, but the function wants 5 parameters.
- You've set the NET_DVR_Logout CLFN to accept 4 parameters, but the function needs only 1 parameter.
- In some CLFNs the parameter types don't match the prototypes exactly; e.g., wPort should be U16 (WORD), not U32 (DWORD). Use the Windows Data Types table to find out what the WinAPI types represent.
These are the prototypes for NET_DVR_Login_V30 and NET_DVR_Logout (as written in Device Network SDK Programming User Manual V4.2):

    LONG NET_DVR_Login_V30(
        char *sDVRIP,
        WORD wDVRPort,
        char *sUserName,
        char *sPassword,
        LPNET_DVR_DEVICEINFO_V30 lpDeviceInfo
    )

    BOOL NET_DVR_Logout(LONG lUserID)
    1 point
  22. It's fine; I moved it to a category I think fits. The Lounge, which is a catch-all, would also work. In the future feel free to use the Report to Moderator function, giving text about what you want to have happen to a thread.
    1 point
  23. With packages you can include files (e.g. an installer) and put them where you want them. You can also call post-install scripts. I think if there is a way to call the installer silently from the CLI, you could script this. You are starting to tread on IT's world, though; but sometimes you need to get it done and for it to work, so perhaps you are best off doing it yourself this way. Seriously though, if they have the systems in a domain or something, they might be able to handle the environment setup independently of your NI packages.
    1 point
  24. That's almost like asking if you can install a GM engine in a Toyota. 😀 The answer is yes, you can, if you are able to rework the chassis and make just about a few thousand other modifications. But don't expect support from either of the two if you run into a snag. More seriously, you may also run into license issues.
    1 point
  25. You have some serious undefined behaviour in your C code. In create_copy_adress_Uint you dereference an uninitialized pointer, writing to a random location. In get_adress_Uint you return the address of a stack variable, which is invalid as soon as the function returns. You are going to experience lots of crashing.
Have you looked at the configuration options for the Call Library Node? You can just pass parameters by pointer. Passing an array by "array data pointer" will let you manipulate the data as in C (but do not try to free that memory). You do not need to make a copy. Be mindful of the lifetime: that pointer is only valid during the function call and might be invalidated later, so don't keep it around after your function returns. If you also want to resize LabVIEW data structures, there are memory manager functions to do that: pass the array by handle and use DSSetHandleSize or NumericArrayResize. Examples for interfacing with DLLs are here: examples\Connectivity\Libraries and Executables\External Code (DLL) Execution.vi
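For illustration, hedged reconstructions of the two bugs described (only the function names come from the post; the bodies are assumptions about the intent):

    #include <stdint.h>
    #include <stdlib.h>

    /* Bug 1: writing through an uninitialized pointer */
    uint32_t *create_copy_adress_Uint(uint32_t value)
    {
        uint32_t *p;             /* WRONG: p points nowhere in particular,
                                    so *p = value writes to a random spot  */
        p = malloc(sizeof *p);   /* fix: allocate storage first            */
        if (p) *p = value;
        return p;                /* the caller must free() this            */
    }

    /* Bug 2: returning the address of a stack variable */
    uint32_t *get_adress_Uint(uint32_t value)
    {
        /* WRONG version:
           uint32_t local = value;
           return &local;        // 'local' dies when the function returns */
        uint32_t *p = malloc(sizeof *p);   /* fix: heap storage survives   */
        if (p) *p = value;
        return p;
    }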
    1 point
  26. Ehh, why not... <gets chair and looks intensely at camera> I think that NI will sell in the next 2-3 years. I agree with X on the churn rate. There's zero chance NI comes out on top in the long term with this plan. NXG is dead; LabVIEW as a competitive language is no more from a professional standpoint. It's firmly an enthusiast language now. That means, like other enthusiast languages, its user base will continue to shrink from here on out. Now you've got two options to deal with this problem: embrace it or hasten its demise. NI is obviously going with the latter. 2-3 (maybe 5?) years of increased revenue while people work their way off the LabVIEW bandwagon (which they were going to do anyway when NXG was nuked), and then they are moving on. It's possible NI just understands the 'make hay while the sun is shining' concept and is going to extract every bit of value from the product in the next half decade, because either way LabVIEW is dead weight on the company in 5-10 years.
The other possibility is that subscription revenue has a higher impact on company value (on paper) than one-off sales. I think subs are a 2-3x multiplier on estimated value. If NI is looking to sell, moving everything to subs and holding for a couple of years until they hit the peak of the revenue curve in 2025, then shopping for a buyer, makes the company look 50-100% more valuable than it was in 2021.
All that's conjecture and theory. I'm more than happy to be proven incorrect, but I believe I am saying the quiet part out loud here, and I think that's a good thing. (I hope)
Best, Tim
    1 point
  27. You are playing with fire. Ownership is key. DO NOT manipulate pointers in LabVIEW, period! You either manipulate data by passing it to a DLL (like an array, where LabVIEW owns the data) or you provide functions to manipulate the data (where the DLL owns the data; and where is your freeing of the pointer allocated inside the DLL?). LabVIEW has no ability to know what a DLL is doing with memory, and vice versa. You must also take the pointer size into account (32-bit LabVIEW or 64-bit LabVIEW). For some types this is handled for you (arrays, for example); for others you will want to use an Unsigned/Signed Pointer-sized Variable (for opaque pointers) and pass that BY VALUE to other functions. Look at the Function Prototype in the dialogue; you will see the C equivalent of the call. Note that you do not seem to be able to do things like int32_t myfunc(&val). Instead you have to use "Pointer to Value" and it will look like int32_t myfunc(int32_t *val). If you are trying to manipulate pointers, you are doing it wrong, and it will crash at some point, taking the whole IDE with it.
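A sketch of the opaque-pointer pattern described, with a hypothetical Session API; in LabVIEW each function becomes a CLFN, and the Session* travels the diagram as an unsigned pointer-sized integer, passed by value:

    #include <stdlib.h>

    typedef struct Session Session;   /* opaque: LabVIEW never looks inside */
    struct Session { int config; };

    /* the DLL allocates... */
    Session *Session_Create(void)
    {
        return calloc(1, sizeof(Session));
    }

    /* ...manipulates the data only through its own functions... */
    void Session_SetConfig(Session *s, int cfg)
    {
        if (s) s->config = cfg;
    }

    /* ...and frees its own memory. LabVIEW never calls free() itself. */
    void Session_Destroy(Session *s)
    {
        free(s);
    }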
    1 point
  28. I have validated this library against LibreOffice 7.2 and Windows 10.
    1 point
  29. We are working on it. We just need more funding...
    1 point
  30. Actually, the nipkg.ini is located at the path specified in the KB (%localappdata%\National Instruments\NI Package Manager\nipkg.ini).
    1 point
  31. Hold my beer...
- That's a great program, your son must be at least 8 or 9.
- If you did [insert some sage programming wisdom here], it might actually do what you think it does.
- Your program has a couple of bugs. Let's give it to someone that might be able to fix them.
- Hmmm. I've seen that architecture before. Isn't it called the spaghetti factory pattern?
- Don't worry. It's not that bad. I'll just put in a capital requisition for another 8 monitors to view the diagram.
    1 point
  32. Back in 2015 the LabVIEW profiler on Windows used an older, lower-resolution time API that I believe could return timestamps that would result in negative deltas. Using unsigned arithmetic would turn those into gigantic time deltas like this. I believe it was LV 2017 or 2018 that was upgraded to use the QueryPerformanceCounter API for these timestamps, which has much better monotonicity and higher resolution. Rob Dye, NI LabVIEW team.
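For reference, a minimal sketch of delta timing with QueryPerformanceCounter (standard Win32 calls; not the profiler's actual code):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);   /* ticks per second */
        QueryPerformanceCounter(&t0);
        Sleep(10);                          /* stand-in for the timed work */
        QueryPerformanceCounter(&t1);
        /* monotonic counter: t1 >= t0, so the delta can't go negative */
        printf("elapsed: %.6f s\n",
               (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart);
        return 0;
    }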
    1 point
  33. If you want to get some more background information (including pretty much useless historical details nowadays) you may also want to read through these blog posts: https://blog.kalbermatter.nl
    1 point
  34. If you want to build a product, even a free download, around the C Generator, think again! It is discontinued and will never come back, and it almost certainly doesn't work in the latest LabVIEW version without trouble; it can't officially be installed beyond LabVIEW 2017 anyhow. The resulting code is, let's say, "interesting", but trying to understand it is another matter. And if you end up trying to solve a problem with it, you are very likely to end up with at least one more problem than before, rather than one less! 😀
    1 point
  35. I had to do that once: moved from LV to C++. A translation doesn't make sense, as many concepts don't transfer well between languages, so it was rewritten from scratch. It's actually funny to see how simple things become complex, and vice versa, as you try to map your architecture to a different language. I assume by "done" you mean feature parity. We approached it as any other software project: first build an MVP, then iterate with new features and bug fixes. We also provided an upgrade path to ensure that customers could upgrade seamlessly (with minimal changes), and made feature parity a high priority. The end result is naturally less buggy, as we didn't bother to rewrite the bugs 😉 It certainly took a while to get all features implemented, but reliability was never a concern. We made our test cases as thorough as possible to ensure that it performs at least as well as the previous software. There is no point in rewriting software if the end result is the same as (or even worse than) its predecessor. That would just be a waste of money and developer time.
    1 point
  36. They are Code Interface Nodes. This is an obsolete and no longer supported technology that was superseded by the Call Library Function Node. If you want to know more about CINs, take a look at the Code Interface Reference Manual. You can still download the C Code Generator package and install it into LabVIEW (2017 is the latest version), but it's not actively maintained these days, and I even suppose it was deprecated as of LV 2020. If you want to dig deeper, that thread may be useful for you.
    1 point
  37. For a regular library (.lvlib), opening a reference to it (via the 'Library.Open' method) will not bring member VIs into memory. For a class (.lvclass), opening a reference to it (via the Library.Open method) will bring all member VIs into memory. There's no way around this behavior that I know of.
    1 point
  38. It was me. And not partially: 64-bit CINs work absolutely the same way as 32-bit CINs. I successfully managed to call all the entry points and later adapted three examples from the Code Interface Reference Manual to 64-bit CINs. Moreover, I managed to make LabVIEW load external subroutines, which was impossible and unsupported in versions 8.0 to 2016. But in fact all this extra knowledge gave me nothing for my real work but wasted time and effort. I stopped experimenting with CINs around 2015 or so and never really came back to this legacy tech.
    1 point
  39. I have been a community admin for another Stack Exchange site, and the difference in tone between the various sites astounds me. I didn't understand how much rides on the tone the admins take. Stack Overflow, the main Exchange, is really bad to the point that I have shadow accounts to ask questions there because once you ask what is seen by some as a dumb question, they follow you to your next question with "you're still an idiot" type comments. It's massively helpful when it is helpful and deeply insulting when it is not. But it doesn't have to be that way -- it's the choice of the admins for that particular Exchange site. There's a difference between one or two obnoxious persons shining a flashlight in your eyes versus a group of people with laser pointers and magnesium flares. In my experience, Stack Overflow is the latter form of hell.
    1 point
  40. While that may be true to a greater or lesser extent, Stack Overflow will close the thread so you can't give more details, ask more probing questions, or help them in any way. What that means is that Google search is littered with dead ends.
    1 point
  41. As if there were no such personalities on NI forums...
    1 point
  42. Some people would say that that is your problem. Others that it is a bliss. 😀
    1 point
  43. Hi Lavans,
I'm working on releasing our Medulla ViPER Dependency Injection Framework to the community as an open source project. ViPER has been a labor of love that I have been working on for close to 8 years. The motivation to develop ViPER was to reduce the cost, time, and frustration involved in deploying test systems in highly regulated industries such as medical device manufacturing. The big problem that ViPER solves is that a change does not require you to perform a full top-to-bottom verification of the system; only the new or changed component needs to be verified.
We used ViPER at Cochlear to test implants and sound processors, and it is the standard architecture used within the enterprise. ViPER was also used to develop a system for Nanosonics to parallel-test up to 100 Trophon 2 units simultaneously, by implementing a Test Server running on an NI Industrial Controller, with HMI clients implemented on tablets for operators, engineers, and admins. Although ViPER is useful for test, it's not used just for test systems; you can build any system with ViPER.
ViPER is a plugin architecture. It implements a recursive factory creator that injects pre-built (and verified) components into a system at runtime, as defined by an Object Definition Document. ViPER can build rich and deep object hierarchies, and can even inject into ancestors as well. Components include soft front panels and an attribute and configuration viewer, and are built on the GDS4 class architecture. ViPER systems are also slim and efficient, because they are not carrying around redundant classes in their builds that may or may not be needed. ViPER includes an Object Editor that allows you to create or edit the Object Definition Document, but it is also a useful engineering tool, allowing you to navigate the object hierarchy and to configure and launch soft front panels for any sub-objects. Included is a project template that allows you to create your own ViPER components.
I presented ViPER at the GLA Summit last year and to the Sydney LabVIEW User Group; I've posted the video of the presentation on LinkedIn. I'm keen to find a few gurus to have a play with it before I release it.
ViPER: A Dependency Injection Framework for LabVIEW
Cheers, Kurt
    1 point
  44. I know this is quite old, but I recently sent a link to this when answering a question, and I realized I have added a couple of opcodes over the years that people might be interested in. Also, Jing as a video recorder is old, and I've been moving my videos away from it and hosting them on YouTube instead. I've downloaded the two videos, Norm, and rehosted them there. Image Move.zip
    1 point
  45. Does it help to re-ask the question as "where should LabVIEW have a future?" It is not difficult to name a number of capabilities (some already stated here) that are extremely useful to anyone collecting or analyzing data, and that are either unique to LabVIEW or much simpler in it. They're often taken for granted, and we forget how significant they are and how much power they unlock. For example (and others can add more):
- FPGA: much easier than any text-based FPGA programming, and so powerful to have deterministic computational access to the raw data stream
- Machine vision: especially combined with a card like the 1473R, though it's falling behind without CoaXPress
- Units: yes, no one uses them, but they can extend strict programming to validation of correct algorithm implementation
- Parallel and multi-threaded programming: is there any language as simple for constructing parallel code? Not to mention natural array computations
- Real-time programming
- Data-flow: a coherent way of treating data as the main object of interest; fundamental, yet a near-unique programming paradigm with many advantages
...and all integrated into a single programming environment where all the compilation and optimization is handled under the hood (with almost enough ability to tweak it).
Unfortunately NI appears to be backing away from many of these strengths, and other features have been vastly overtaken (image processing has hardly been developed in the last 10 years, and GUI design got sidetracked into NXG, unfortunately). But the combination of low-level control in a very high-level language seems far too powerful and useful to have no future at all.
    1 point
  46. I did not know. That possibility was not even on my radar. Even though the drumbeat of bad news had been going for a while, most corporations refuse to change direction on a bad decision. NI showed more sentience than I usually expect from massed humans: the sunk cost fallacy is a trap that is very hard to get out of. I figured the very good engineers on NXG would either surge through it and make it fly or we would bankrupt the company trying. That's the pattern established by plenty of other companies.
Mixed. I spent 4.5 years directly working on NXG (2011 to 2016) and countless hours in later years working with the NXG team to design a future G. I really wanted it to fly. There is so much good in that IDE, including some amazing things that I just don't see how we ever do in the LabVIEW codebase without just shattering compatibility. But at the same time, I was watching good friends toil on something that the market just wasn't adopting. The software had some problems that were going to take a long time to solve. The issues were all solvable, but the time needed to fix them... that was harder and harder to justify.
NXG gave us a GREAT platform for other software: Veristand, FlexLogger, etc. That code is extremely modular and can be repurposed for all sorts of tools. We also learned a heck of a lot by building NXG -- some things that I thought we could never do in LabVIEW now seem possible. NXG gave us a sandbox to learn a whole lot about modern software engineering without putting the delivery schedule for mature software at risk, and those practices [have been|are being] brought back and applied to LabVIEW -- that will decrease the cost of maintaining older code. All in all, NXG was valuable -- the expenditure was not a complete loss.
I am very sorry to the few customers who did migrate to NXG. We don't have a reverse migration tool, and building one would be absurdly expensive. Leaving those folks stranded is going to hurt -- I hate letting our customers down and just saying, "We have no solution to help you." There aren't many of those folks (that's essentially the problem), but they do exist, and they are basically stuck having to rewrite their NXG apps in some other tool. I can only hope that they pick LabVIEW. I don't know if this will help us or hurt us with customers in the future... on one hand, people may say, "Well, you let us down on NXG, why should we trust you will be there for us on any new products?" On the other hand, this decision was clearly made listening to customer feedback, and it takes a lot of humility to swallow a loss that big, which may make customers trust our judgement more in the future. And, really, there's nothing to compare with the scale of NXG -- an entire computing platform -- so this does seem like something that needs to be judged in isolation.
I really like programming in G. I like being able to expand G to make it more powerful. I wanted NXG to succeed because it had the potential to be a better G. It failed. Its failure means more resources for the existing LabVIEW platform, which will directly help our customers in the short run. It leaves open some big questions for the long run.
So, in summary: I think it was a decision that had to be made, and I'm happy to work for a company that can learn from new data, then admit a mistake, and then figure out how to correct it.
    1 point
  47. I found this tonight while working on a project: https://remixicon.com/ Really good icon library with modern-looking icons where you can customize the color and size of the icons, then download them as PNG files. I then import them into a LabVIEW pict ring and it's off to the races.
    1 point
  48. Daklu, the style of my writing below is fairly terse and occasionally EMPHATIC. I've done this to emphasize key points that I think you've missed in How Things Work. Some customers in the past have felt I'm insulting them writing this way, but it is the only way I know, through the limited text medium, to highlight the key points. My only other option is to post just the key bits and leave out a lot of the exposition, but that doesn't seem to be as helpful when communicating. So, please, don't think I'm calling you dumb or being disdainful. I'm trying to teach. The problem is that you're almost right. Customers who are completely off-base are easier to teach because they need the whole lesson. Here, I'm just trying to call out key points, but presenting them in their full context to make sure it's clear what fits where. Throughout the post, refer to the graphic at the end, as it may clarify what I'm talking about.
Daklu wrote:
No. If there is, that's a bug that needs to be reported to NI ASAP. Think about what you just asked for... ignore the DVR part for a moment. You just asked for a PARENT object to invoke a CHILD class method. That cannot ever happen. You cannot pass a parent object directly to a function that takes a child object. The parent object in question IS NOT a child: the parent object does not have the child's private data, nor does it have all the methods that may have been defined on the child class. For this reason, LV will break the wire if you try to wire a parent wire to a child terminal; the wire is broken because there are zero situations in which this could successfully execute. You CAN pass a child object to a parent terminal. That is because a child IS an instance of the parent: it has all the necessary data and methods defined to act as a parent object.
What you can do is take a child wire, upcast it to a parent wire, and make a Parent DVR out of that. Alternatively, you could take a child wire, make a Child DVR wire, and then upcast that to a Parent DVR wire. These two processes produce the identical result: a parent DVR that contains child data. Upcast and downcast DO NOT create new objects EXCEPT when they return an error. The point of a cast is to say, "I have an existing object. Please check that it is this type and approve it to go downstream if it passes this test." You use this only when you need to do something for a specific type of object and you do not have the ability to edit the parent and child classes in order to add the appropriate dynamic dispatch VIs to both.
Preserve Run-Time Class (PRTC) is the same thing: "Allow this object to pass downstream if it passes this test, otherwise create a new object that does pass the test." The test in this case is, "Does the object in have the same TYPE AT RUN TIME as the OBJECT (not the wire) on the target object input?" If the left object is the same as, or a child class of, the center object, then there is no error. You will ALMOST NEVER WIRE PRTC WITH A CONSTANT FOR THE CENTER TERMINAL. I would say "never" because I can't think of any useful cases, but maybe someone has something out there. If you are wiring the center terminal of PRTC with a constant, something is wrong in your code. You use PRTC to assert that the left object, which comes from some mystical source, is the right type to fulfill the run-time type requirements of dynamic dispatch VIs, automatic downcast static VIs, and the Lock/Unlock of Data Value References.
In all three of these cases, there is some input (either the input FPTerm or the left side of the Inplace Element Structure) that must be passed across to the output (either the output FPTerm or the right side of the Inplace Element Structure) WITHOUT PASSING THROUGH ANY FUNCTION THAT CHANGES THE OBJECT'S TYPE. You're free to change the object's value, but not its type. Sometimes you pass the object to functions where LV cannot prove that the type is maintained. Easy example: pass the object into a global VI and then read the global VI. You'd never do this, of course, but it demonstrates the problem. LV cannot know that the object you read from the global is the same object you wrote in; some other VI elsewhere might have written to the global in parallel. But you, as the programmer, know that there are no other writes to the global VI, so you use PRTC to assert, "this is going to be the right object type." You wire the original input (as described above) to the center terminal and the output of the global to the left terminal, and pass the result to the original output terminal (as described above). Does that make sense?
    1 point