
Leaderboard

Popular Content

Showing content with the highest reputation since 04/28/2022 in all areas

  1. June 3 will be my last working day at NI. After almost 22 years, I'm stepping away from the company. Why? I found a G programming job in a field I love. Starting June 20, I'm going to be working at SpaceX on ground control for Falcon and Dragon. This news went public with customers at NI Connect this week. I figured I should post to the wider LabVIEW community here on LAVA. I want to thank you all for being amazing customers and letting me participate vicariously in so many cool engineering projects over the years. I'm still going to be a part of the LabVIEW community, but I'm not going to be making quite such an impact on G users going forward... until the day that they start needing developers on Mars -- remote desktop with a multi-minute delay between mouse clicks is such a pain! 🙂
    9 points
  2. Nice! While you are there please convince Elon to buy NI and turn it back into an engineering company 🤣
    3 points
  3. LabVIEW has never been a money maker for NI directly. They were able to develop LabVIEW because of what they earned with their hardware sales, and LabVIEW was used to drive those hardware sales. A very successful model that drove others like HP Vee out of the market in the not very long term. Maybe HP/Agilent was also simply already too big for the market segment they could possibly target with a product like this. The entire T&M component market isn't that huge. For HP it was what they had started with, but the big money was earned (and sometimes big money was lost) in other areas already. T&M was good for steady revenue, but nothing that would stand out on the yearly shareholder report. It was unsexy for non-technicals and rather boring. That was one of the big reasons to separate HP into multiple companies: an attempt to create smaller entities that each target a specific market segment and could be fine-tuned in their sales and marketing efforts to the needs of that market.
About 10 years ago NI reached the size where they started to feel the limitations of the T&M component market themselves. There simply was not a big enough market left to capture to continue their standard double-digit yearly sales growth for much longer. Some analysts were hired to look into this, and their recommendations were pretty clear: don't try to be the wholesale everything-for-everyone small-parts manufacturer in T&M, but concentrate on specific areas where big corporations with huge production lines invest their test and measurement money. Their choice fell on semiconductor testing and, more recently, the EV market. It has huge potential, and rather than selling tens of thousands of DAQ boxes to hundreds of integrators, they now sell and deliver hundreds of fully assembled turnkey testers to those corporate users and earn more with each of them than they could ever earn with several thousand DAQ boxes. What used to be NI's core business is nowadays a sideline, at best a means to deliver some parts for those testers, and more and more a burden that costs a lot of money in comparison to the revenue it could generate even under ideal conditions.
If you understand this, you can also guess where NI is heading. They won't die, and their shares will likely not falter, but what they will be has little to do with what they used to be. Whether LabVIEW still has a place in this I do not know. Personally I think it would be better if it were under the umbrella of a completely separate entity from the new NI, but I also have my doubts that that would have long-term chances of survival. Earning enough money with a development environment itself is a feat that almost nobody has managed successfully for a longer period.
The sometimes-heard request to open source LabVIEW also does not have a lot of chances. It would likely cause a few people to take a peek at it and then quickly lose interest, since its code base is simply too complex. And there is also the problem that the current LabVIEW source code could never be open sourced as is. There are so many patent-encumbered parts and 3rd-party license dependencies in it that nobody would be legally allowed to distribute even a single build of it without first hiring an entire law firm to settle those issues. While NI owns the rights to them or acquired licenses to use them, many of these licenses do not give NI the right to simply let others use them as they wish. So open sourcing LabVIEW would be a fairly big investment in time and effort before it could even be done.
And who is willing to foot that bill?
    3 points
  4. I wanted to have a 2D drawing of my house's layout, so I could have a clear picture of which outlets and lights were on which breakers. I ended up using PowerPoint because it had tons of shapes and was easy to use. I tried a couple of other 2D drawing tools first, but for a one-off I figured PowerPoint was easy enough. I showed it to a friend who was impressed that it was PowerPoint. Here it is, not quite finished. This reminded me of a presentation I saw a couple years ago about how PowerPoint has lots of unused powerful features. It is an hour long, so maybe skip some in the middle. But experimenting with the morph features, 3D models, fractals, context-aware designs, and using it for full-screen programs are some of the topics. I especially love the library of 3D models.
    2 points
  5. 2 points
  6. sounds like they were too busy updating NI Logo and colors to implement VISA. Oh well.
    2 points
  7. You should be using $[*] or $[0] to indicate Array elements; $.[*] indicates all items in a JSON Object and $.[0] is the Object item named "0". Look at the detailed Help page for JSON Path notation in JSONtext.
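A small made-up illustration of the difference, applying the rules above (check the JSONtext help page for the authoritative definition):

    Array:  [10, 20, 30]          Object:  {"0": "zero", "x": 1}

    $[*]   -> every Array element: 10, 20, 30
    $[0]   -> the first Array element: 10
    $.[*]  -> every item in the Object: "zero" and 1
    $.[0]  -> the Object item named "0": "zero"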
    2 points
  8. The SQLite API for LabVIEW had a feature for that. Very easy with a database. I suppose you could do something similar just by saving and loading a particularly named JSON file.
    2 points
  9. Two suggestions: 1) Consider using JSON as your config-data format, rather than clusters. Using JSONtext to manipulate JSON will be faster than using OpenG tools to manipulate clusters. 2) Programmatically get an array of references to all your config-window controls and register a single Event case for Value Change of any one of them. Then use their (hidden) labels to encode what config item they set. For example, your control with the caption "Baud Rate" could have the hidden label "$.Serial Settings.Baud Rate" which is the JSONpath to set in your config JSON (or config clusters).
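A made-up sketch of how suggestion 2) ties together (the JSON layout and label text are invented for illustration): each control's hidden label carries the JSONpath of the setting it edits, so one Value Change case can service every control.

    Config JSON:   { "Serial Settings": { "Baud Rate": 9600, "Parity": "None" } }
    Hidden label:  "$.Serial Settings.Baud Rate"

In the single Value Change case: read the label of the control that fired, use it as the JSONpath, and write the control's new value at that path in the config JSON.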
    2 points
  10. Pointers are pointers. Whether you use DSNewPtr() and DSDisposePtr() or malloc() and free() doesn't matter too much, as long as you stay consistent. A pointer allocated with malloc() has to be deallocated with free(), a DSNewPtr() must be deallocated with DSDisposePtr(), and a pointer allocated with HeapAlloc() must be deallocated with HeapFree(), etc. They may in the end all come from the same heap (likely the Windows heap), but you do not know, and even if they do, the pointer itself may be, and often is, different, since each memory manager layer adds a little of its own on top to manage its pointers better. To make matters worse, if you resolve to use malloc() and free() you always have to do the corresponding operations in the same compilation unit. Your DLL may be linked with gcc c-lib 6.4 and the calling application with MS C Runtime 14.0, and while both have a malloc() and free() function, they absolutely and certainly will not operate on the same heap.
Pointers are non-relocatable as far as LabVIEW is concerned, and LabVIEW only uses them for clusters and internal data structures. All variable-sized data on the diagram, such as arrays and strings, is ALWAYS allocated as a handle. A handle is a pointer to a pointer, and the first N int32 elements in the data buffer are the dimension sizes, followed directly by the data (memory aligned if necessary), N being the number of dimensions. Handles can be resized with DSSetHandleSize() or NumericArrayResize(), but the size of the handle does not have to be the same as the size elements in the array that indicate how many elements the array holds. Obviously the handle must always be big enough to hold all the data, but if you change the size element in an array to indicate that it holds fewer elements than before, you do not necessarily have to resize the handle to that smaller size. Still, if the change is big you absolutely should do so anyhow, but if you reduce the array by a few elements you can forgo the resize call.
There is NO way to return pointers from your DLL and have LabVIEW use them as arrays or strings, NONE whatsoever! If you want to return such data to LabVIEW it has to be in a handle, and that handle has to be allocated, resized, and deallocated with the LabVIEW memory manager functions. No exception, no passing Start and collecting your salary, nada, niente, nothing! If you do it this way, LabVIEW can directly use that handle as an array or string, but of course what you do in C in terms of the data type in it and the corresponding size element(s) in front of it must match exactly. LabVIEW absolutely trusts that a handle is constructed the way it wants it and makes painstakingly sure to always do it like that itself, so you had better do so too.
One speciality in that respect: LabVIEW does explicitly allow for a NULL handle. This is equivalent to an "empty" handle with the size elements set to 0, and exists for performance reasons. There is little sense in invoking the memory manager and allocating a handle just to store in it that there is no data to access. So if you pass handle data types from your diagram to your C function, your C function should be prepared to deal with an incoming NULL handle. If you just blindly try to call DSSetHandleSize() on that handle it can crash, as LabVIEW may have passed in a NULL handle rather than a valid empty handle.
Personally I prefer to use NumericArrayResize() at all times, as it already deals with this speciality properly and also accounts for the actual bytes needed to store the size elements, as well as any platform-specific alignment. A 1D array of 10 double values requires 84 bytes on Win32, but 88 bytes on Win64, since under Win64 the array data elements are aligned to their natural size of 8 bytes. When you use DSSetHandleSize() or DSNewHandle() you have to account for the int32 size element and the possible alignment yourself. If you use

    err = NumericArrayResize(fD, 1, (UHandle*)&handle, 10)

you simply specify in its first parameter that it is an fD (floatDouble) data type array, that there is 1 dimension, the handle passed by reference, and the number of array elements it should have. If the array was a NULL handle, the function allocates a new handle of the necessary size. If the handle was a valid handle instead, it will resize it to be big enough to hold the necessary data. You still have to fill in the actual size of the array after you copy the actual data into it, but at least the complication of calculating how big that handle should be is taken out of your hands.
Of course you can also always go the traditional C way: the caller MUST allocate a memory buffer big enough for the callee to work with and pass its pointer down to the callee, which then writes something into it, and after return the data is in that buffer. The way that works in LabVIEW is that you MUST make sure to allocate the array or string prior to calling the function. Initialize Array is a good function for that, but you can also use the Minimum Size configuration in the Call Library Node for array and string parameters. LabVIEW allocates a handle, but when you configure the parameter in the Call Library Node as a data pointer, LabVIEW will pass the pointer portion of that handle to the DLL. For the duration of that function, LabVIEW guarantees that that pointer stays put in memory and won't be reused anywhere else, moved, deallocated, or anything else like that (unless you checked the constant checkbox in the Call Library Node for that parameter, in which case LabVIEW will use that as a hint that it can also pass the handle in parallel to other functions that are likewise marked as not going to modify it). It has no way to prevent you from writing into that pointer anyhow in your C function, but that is a clear violation of the contract you yourself set up when configuring the Call Library Node and telling LabVIEW that this parameter is constant. Once the function returns control to the LabVIEW diagram, that handle can get reused, resized, or deallocated at absolutely any time, and you should therefore NEVER EVER hold onto such a pointer beyond the time when you return control back to LabVIEW! That's pretty much it. Simple as that, but most people fail at it anyhow, repeatedly.
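As a minimal sketch of that NumericArrayResize() pattern (the struct and function names are illustrative; the declarations come from LabVIEW's extcode.h, and lv_prolog.h/lv_epilog.h set the platform packing LabVIEW expects):

    #include "extcode.h"

    #include "lv_prolog.h"   /* switch to LabVIEW's platform struct packing */
    typedef struct {
        int32 dimSize;       /* number of elements the array logically holds */
        float64 elt[1];      /* data follows the size element */
    } DblArrayRec, **DblArrayHdl;
    #include "lv_epilog.h"   /* restore default packing */

    /* Fill a LabVIEW 1D double array handle with nElems values. Works for an
       incoming NULL handle as well as an existing one, per the rules above. */
    MgErr FillDoubles(DblArrayHdl *arr, int32 nElems)
    {
        int32 i;
        MgErr err = NumericArrayResize(fD, 1, (UHandle*)arr, nElems);
        if (err)
            return err;
        for (i = 0; i < nElems; i++)
            (**arr)->elt[i] = (float64)i;
        (**arr)->dimSize = nElems;   /* record the logical size last */
        return mgNoErr;
    }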
    2 points
  11. Relevant slide from my Don't Wait for LabVIEW R&D... Implement Your Own LabVIEW Features! presentation:
    2 points
  12. You need to understand what managed code means. In .Net that is a very clear and well-defined term and has huge implications. LabVIEW is a fully managed environment too, and all the same basic rules apply. C, on the other hand, is completely unmanaged. Who owns a pointer, who is allowed to do anything with it (even reading from it), and when, is completely up to contracts that each API designer defines himself. And if you as caller don't adhere to that contract to the letter, no matter how brain-damaged or undocumented it is, you are in VERY DEEEEEEEP trouble.
LabVIEW (and .Net and (D)COM and others like it) all have a very well-defined management contract. Well defined doesn't necessarily mean that it is simple to understand, or that there is lengthy documentation that details everything about it; not even .Net has exhaustive documentation. Much of it is based on some basic rules and a set of APIs to use that guarantee that the management of memory objects is fully consistent and protected throughout the lifetime of each of those objects. Mixing and matching those ideas between environments is a guaranteed recipe for disaster, and so is not understanding them as you pass around data! For other platforms such as Linux and macOS there also exist certain management rules, and they are typically specific to the API or group of APIs used. For instance it makes a huge difference whether you use old (and mostly deprecated) Carbon APIs or modern Cocoa APIs. They share some common concepts, and some of their data types are even transferable between the two without invoking costly environmental conversions, but there the common base stops. Linux, true to its heritage, is a collection of very different ideas and concepts; each API tends to follow its own specific rules. Much of it is very logical once you understand the principles of safe and managed memory. Until then it all looks like incomprehensible magic, and you are much better off staying away from trying to optimize memory copies and such things to squeeze out a little more performance.
One of the strengths of LabVIEW is that it is very difficult to write code that crashes your program. That is, until you venture into accessing external code. Once you do that, your program is VERY likely to crash, randomly or not so randomly, unless you fully understand all the implications and intricacies of working that way. The pointer from a LabVIEW array or string, passed to the Call Library Node, is only guaranteed to exist for the time your function runs. Once your function returns control back to LabVIEW, LabVIEW reserves the right to reallocate, resize, delete, or reuse that memory buffer for anything it deems necessary. This part is VERY important to allow LabVIEW to optimize memory copies of large buffers. If you want a buffer that you can control yourself, you have to allocate it yourself explicitly and pass its reference around to wherever it is needed. But do not expect LabVIEW to deallocate it for you. As far as LabVIEW is concerned it does not know that that variable is a memory buffer, nor when it is no longer needed, nor which heap management routines it should use to properly deallocate it. And don't expect it to be able to directly dereference the data in that buffer to display it in a graph, for instance.
As far as LabVIEW is concerned, that buffer is simply a scalar integer that is nothing more than a magic number: it could mean how many kilometers away the moon is, or how many seconds exist in the universe's life, or how many atoms fit in a cup of tea, or anything else you fancy.
Or you pass the native LabVIEW buffer handle into the Call Library Node and use the LabVIEW memory manager functions if you have to resize or deallocate it. That way you can use LabVIEW buffers and adhere to the LabVIEW management contract. But it means that that part of your external code can only run when called from LabVIEW. Other environments do not know about these memory management functions and consequently cannot provide compatible memory buffers to pass into your functions. And definitely don't ever store such handles somewhere in your external code to access them asynchronously from elsewhere once your function has returned control to LabVIEW. That handle is only guaranteed to exist for the duration of your function call, as mentioned above. LabVIEW remains in control of it and will do with it whatever it pleases once you return control from your function call to the LabVIEW diagram. It could reuse it for something entirely different, and your asynchronous access will destroy its contents, or it could simply deallocate it, and your asynchronous access will reach into nirvana and send your LabVIEW process into "An Access Violation has occurred in your program. Save any data you may need and restart the program! Do it now, don't wait and don't linger, your computer may start to blow up otherwise!" 😀
And yes, one more piece of advice: once you start to deal with external code anywhere and in any way, don't come here or to the NI forum and ask why your program crashes or starts to behave very strangely and whether there is a known LabVIEW bug causing this. Chances are about 99.25678% that the reason for that behaviour is your external code or the interface you created for it with Call Library Nodes. If your external code tries to be fancy and deals with memory buffers, that chance increases by several orders of magnitude! So be warned!
In that case you are doing something fundamentally wrong. Python is notoriously slow, due to its interpreted nature and the concept of everything being an object. There are no native arrays, as these are represented as lists of objects. To get around that, numpy uses wrapper objects around externally managed memory buffers that allow consecutive representation of arrays in one single memory object and fast indexing into them. That allows numpy routines to be relatively fast when operating on arrays; without that, any array-like manipulation tends to be dog slow. LabVIEW is fully compiled and uses many optimizations that let it beat Python performance with its hands tied behind its back. If your code runs so much slower in LabVIEW, you have obviously done something wrong, and not just tied its hands behind its back but gagged and hogtied it too. Things that can cause this are, for instance, Build Array nodes inside large loops if we talk about LabVIEW diagram code, and bad external code management if you pass large arrays between LabVIEW and your external code. The experiments you show in your post may be interesting exercises, but they definitely go astray in trying to solve such issues.
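To make the lifetime rule above concrete, a minimal sketch with invented names: copy what you need while your function runs, and never cache the pointer LabVIEW handed you.

    #include <stdlib.h>
    #include <string.h>

    static double *g_copy = NULL;   /* buffer the DLL owns and manages itself */
    static int g_len = 0;

    /* 'data' is the CLFN "array data pointer"; it is only valid until return */
    void StoreSamples(const double *data, int len)
    {
        double *buf = malloc(len * sizeof *buf);
        if (!buf)
            return;
        memcpy(buf, data, len * sizeof *buf);  /* safe: done during the call */
        free(g_copy);
        g_copy = buf;   /* stashing 'data' itself here would be the classic bug */
        g_len = len;
    }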
    2 points
  13. Be afraid; be very afraid. Generally, there is no concept of a pointer in LabVIEW. LabVIEW is a managed environment, so it is more like .NET. You don't know where a value is stored or even how much memory is used to store it. The CLFN will do that out of the box. Yes. Because you don't know where it is for the lifetime of the variable.
    2 points
  14. I'd just pop up a dialog with a "Please contact our company's support channel and Idea Exchange forum to discuss or request implementation of this feature. Reminder: access to future improvements of our software is reserved to continuing subscribers. Other cheaper and more powerful alternatives may be available".
    2 points
  15. Fun! Good luck in your new endeavor. But the LabVIEW development team loses a very valuable and important member for sure.
    1 point
  16. Same as LabVIEW's zoom feature
    1 point
  17. Ah, it's because I added submenus in the last release, and this added '_1' onto the menu name. I'll fix that.
    1 point
  18. I don't understand the connection. They were running it on a low-power laptop. They were a student. They were (and continue to be) concerned with the climate. They were (and continue to) consider themselves poor. Not that it matters.
    1 point
  19. Generally, if you use an external library to do something because that library does things that LabVIEW can't: go for it! If you write an external library to operate on multidimensional arrays and do things with them that LabVIEW has native functions for: you are totally and truly wasting your time. Your compiled C code may in some corner cases be a little faster, especially if you really know what you are doing at the C level, and I do mean REALLY knowing, not just hacking around until something works. So sit back, relax, and think about where you actually need to pass data to your external Haibal library to do real work, and where you are simply wasting your time with premature optimization. So far your experiments look fine as a pure educational exercise, but they serve very little purpose in trying to optimize something like the interface to a massive numerical library like Haibal is supposed to become.
What you need to do is design the interfaces between your library and LabVIEW in a way to pass data around. And that works best by following as few rules as possible, but all of them VERY strictly. You cannot change how LabVIEW memory management works, and neither can you likely change how your external code wants its data buffers allocated and managed. There is almost always some impedance mismatch between the two for any but the most simple libraries. The LabVIEW Call Library Node allows you to support some common C scenarios in the form of data pointers. In addition it allows you to pass its native data to your C code, which every standard library out there simply has no idea what to do with. Here comes your wrapper shared library interface: it needs to manage this impedance mismatch in a way that is both logical throughout and still performant. Allocating pointers in your C code to pass back and forth across LabVIEW is a possibility, but you want to avoid that as much as possible. Such a pointer is an anachronism in terms of LabVIEW diagram code. It exposes internals of your library to the LabVIEW diagram, and in that way makes access possible for the 99% of your users who have no business doing that, nor are able to understand what they are doing. And no, saying "don't do that" usually only helps those who are professionals in software development. All the others quickly believe they know better, and then the reports about your software misbehaving and being a piece of junk start pouring in.
    1 point
  20. I was wondering about that too. But then the scrollbars in the image he posted seem to indicate that that VI is actually properly inserted, and the Insert VI method doesn't seem to return an error either. With the limited information that he tends to give and the limited LabVIEW knowledge he seems to have, it is all very difficult to debug remotely, though. And it is not really my job to do so.
Edit: I'll be damned! A VI inserted into a Subpanel does not have a window handle at all. I thought I had tested that, but apparently I got misled in some way. LabVIEW seems to handle that all internally without using any Windows support for it. So back to the drawing board, to make that not a Subpanel window but instead use real Windows child window functionality. I don't like to use the main VI's front panel as the drawing canvas, as the library would draw all over the front panel, fighting LabVIEW's control and indicator redraws.
As to the NET_DVR_GetErrorMessage() call: I overlooked that one. Good catch and totally unexpected! It seems that the GetLastError() call is redundant when calling this function, as GetErrorMessage() is not just a function to translate an error code but really a full replacement for GetLastError(). Highly unusual, to say the least, but that's what you get for not reading the documentation to the last letter. 😆
It's hard to debug such software without having any hardware to test with, so the whole library that I posted is in fact a dry exercise that has never run in any way, as there is nothing it can really run with on my system. Same for the callback code: I tested that it compiles (with my old but trusted VS2005 installation), but I cannot test that it runs properly. Well, I could, but that would require writing even more C code to create a test harness that simulates the Hikvision SDK functionality. I like to tinker with this kind of problem, but everything has its limits when it is just a hack job in my free time. 😀
Attached is a revisited version of the library with the error VI fixed; it does not use a Subpanel for now but simply lets Empty.vi stand on its own for the moment. Quick and dirty, but we can worry about getting that properly embedded in the main VI after it has proven to work like this. HKNetSDK Interface.zip
    1 point
  21. No no. That is only one part of the problem, and not the biggest one. The biggest problem is providing a callback function pointer that matches exactly what the library expects. If you compile C source code that implements that function, you can let the C compiler do all that work, but I was under the impression that you were looking for a solution that avoids the need for the end user of the solution to compile a C code file that also has to be specifically adapted to the callback interface the library expects.
You need a memory pointer that can be used as a function pointer and that provides a stack frame exactly compatible with the interface the library expects. On the other side you need an interface that can adapt to the data type that your PostLVUserEvent() needs. By fixing this PostLVUserEvent() interface to one or two data types, you solve one of the problems, but you still need to provide an adaptable function pointer that can match the function interface the library wants for that callback pointer. And that is where C itself simply won't help you. Why should it? The C compiler is much better at figuring this out, so why spend time developing a standard that lets you do C compilation at runtime? Yes, such libraries exist, but that is not something a C compiler would ever consider supporting itself. Variadic and pack extensions were therefore never developed for such use cases and simply can't help with this.
If you don't want a C compiler involved in the end user solution, you need a way to express your callback function interface in some strict syntax, plus a translation layer that can comprehend this interface description and then prepare a call stack frame, with a function prolog and epilog that refer to that stack frame and clean it up properly at the end. That's the really hard part of your proposed solution. And the user of such a solution also needs to understand the description syntax to have your library build the correct stack frame and function prolog and epilog for that function pointer. Then you could have it call into a common function that receives the stack frame pointer and stack parameter description and translates that into whatever you need. This translation from anything to the something that PostLVUserEvent() needs can be as elaborate as you want. It's not really very difficult to do (if you really know C programming and can twiddle bits and bytes efficiently, of course), just a lot of work, as you need to be prepared to handle a lot of data types on either side of that translation. You can reduce that complexity by specifying that the PostLVUserEvent() interface always has to be one and only one specific data type, but it is still a lot of work. This second part, translating a known stack frame of data types, is similar in complexity to those LabVIEW libraries out there that parse a variant and then try to convert it back into LabVIEW native data themselves. It's complicated and a lot of work, and I have seldom seen a library that did a really decent job of that beyond serving some basic data types, but it can be done with enough determination.
The REAL problem, however, is to let the user build a callback pointer that matches the expected calling interface perfectly without having him execute a C compiler. This is only really possible with a library like libffi or similar, unless you intend to start playing assembler yourself. There is no compiler support for this (unless maybe you would like to repurpose llvm into your own library), since it makes little sense to let the compiler externalize its entire logic into a user library. The gcc developers have no interest in letting a user create another gcc-like thing, not because they shy away from competition but because the effort would be enormous and they have enough work to churn through to make the compiler work well and adapt to ever-growing standard proposals. libffi allows you to build callback pointers, but it of course needs a stack frame description and other attributes, such as the calling convention, to be able to do so. And it needs a user function pointer to which it can forward control after it has unraveled the stack; after that function has done its work, libffi executes its epilog to do any stack cleanup that may be required.
If LabVIEW had an officially documented way to call VIs from external code, I would have tried something like that long ago, as it would be very handy to have in Lua for LabVIEW. Currently Lua for LabVIEW does all kinds of very involved gymnastics, which also involve a background LabVIEW VI daemon to which the shared library passes any requests from the Lua code to execute VIs (well, the VIs are really registered in Lua simply as other Lua functions, so the Lua script has no idea that it is effectively calling a LabVIEW VI), which in turn calls that VI from its LabVIEW context and passes any return values back. Since this is such a roundabout way of doing things, it has to limit the possibilities. A LabVIEW VI can execute Lua code that calls back into LabVIEW VIs, but after that it gets too complicated to manage the necessary stack frames across calling environments, and the Lua for LabVIEW shared library has specific measures to detect such calls and simply forbids them categorically for any further round trip. It also has a very extensive function to translate Lua data to LabVIEW data based on the LabVIEW type descriptor, and a corresponding reverse function too, which handle almost all the data types that LabVIEW and Lua know. But I can't directly invoke a VI from within the external code, since there is no documented function to do so. Yes, I know there are functions that can do it, but without full documentation about them I will not embark on using them in any project that will ever leave my little office.
Basically your desired solution has two distinct areas that need a solution:
1) Create a function pointer with correct prolog and epilog according to some user-specified interface description, which will then call the function in 2).
2) Create a function that receives the stack frame information and data type description, and then uses another user specification that defines which of those stack frame parameters should be used, how, and into what kind of data type they should be translated to pass to PostLVUserEvent().
1) can be solved by using libffi, but it is not going to be easy. libffi is anything but trivial to use, but then it solves a problem that is also anything but common in programming. 2) is simply a more or less complex function that can be developed and tested independently from the rest. It is NOT the big problem here, just quite a bit of work.
If the callback allows you to pass a user data pointer, you can repurpose that to carry all the information about how the stack frame is built and how to translate from the stack frame to the PostLVUserEvent() call, from the setup function that prepares the callback pointer, through this pointer. If there is no such user data pointer, you have the additional problem of how to provide the necessary information to your translation function. It may be possible to prepare the callback pointer with some extra memory area to hold that pointer, such as prepending it directly in front of the actual entry point and dereferencing it with a negative offset inside the callback, but that is going to be highly hacky and has quite a big chance of breaking on some platforms, CPUs or OSes.
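For what it's worth, a rough sketch of how part 1) could look with libffi (the handler body and the example two-parameter signature are invented; a real version would build the ffi_type list from the user's interface description):

    #include <ffi.h>
    #include <stddef.h>

    /* Generic target: libffi unravels the stack frame and hands us the
       parameters as an array of pointers, plus our user data pointer. */
    static void generic_handler(ffi_cif *cif, void *ret, void **args, void *user)
    {
        /* args[0], args[1], ... point at the actual stack parameters here;
           translate them and call PostLVUserEvent() as described above. */
        (void)cif; (void)ret; (void)args; (void)user;
    }

    /* Build a callback pointer for an example 'void cb(long, void *)' frame. */
    void *make_callback(void *user_data)
    {
        static ffi_cif cif;   /* must outlive the closure; not reentrant here */
        static ffi_type *argtypes[2] = { &ffi_type_slong, &ffi_type_pointer };
        void *codeloc;
        ffi_closure *closure = ffi_closure_alloc(sizeof(ffi_closure), &codeloc);
        if (!closure)
            return NULL;
        if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 2, &ffi_type_void, argtypes) != FFI_OK ||
            ffi_prep_closure_loc(closure, &cif, generic_handler, user_data, codeloc) != FFI_OK) {
            ffi_closure_free(closure);
            return NULL;
        }
        return codeloc;   /* hand this to the library as its callback pointer */
    }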
    1 point
  22. You are right. By the way, that is the example that I used a while ago to study PostLVUserEvent, just like you are doing. But you don't need the string manipulations from there. You're going to pass the pBuffer pointer to LabVIEW with the PostLVUserEvent function inside your callback, and you should be done. Looking at your C code, I see you're doing more or less fine. But you don't even need to implement the main function, because all the work with the cameras is done entirely in your LabVIEW application; you could remove that code altogether. Besides the callback function you'll need one extra helper function that sets your User Event refnum into a global variable in your DLL. That's needed because when you want to call PostLVUserEvent, you'll need that refnum, and you can take it out of that global variable. Something like this:

    #include <stdio.h>
    #include <iostream>
    #include <time.h>
    #include "Windows.h"
    #include "extcode.h"

    using namespace std;

    LVUserEventRef *pUE;

    void SendEvent(LVUserEventRef *rwer)
    {
        pUE = rwer;
    }

    void CALLBACK g_DataCallBack(LONG lRealHandle, DWORD dwDataType, BYTE *pBuffer, DWORD dwBufSize, void *pUser)
    {
        //your callback code here
        // ...
        //PostLVUserEvent(*pUE, (void *)&pBuffer); is here as well
    }

It will likely require some small fine-tuning, like adding extern "C" { ... } to escape function name mangling.
    1 point
  23. There is also a presentation on YouTube, which I found helpful: https://www.youtube.com/watch?v=xXGro_DylHs&ab_channel=LabVIEWArchitectsForum
FWIW, a few lessons learned:
- The project provider VIs have to be installed in the LabVIEW resource folder. Distributing this through VIPM makes it super easy to specify this install location, and also to turn on "require LabVIEW to restart" after install.
- You have to restart LabVIEW to test code changes.
- The code executes in a separate LabVIEW context. This can cause some weird behavior; I found that not all code runs safely in this context.
    1 point
  24. Looks like The Pirate Bay is going to become NI support.
    1 point
  25. I think it's called a project provider; there is a special interest group on the NI Forums. I've never fiddled with this, but some have. Here are a few other links:
- https://forums.ni.com/t5/Developer-Center-Resources/Customize-the-LabVIEW-Project-Explorer-Using-the-Project/ta-p/3532774?profile.language=fr
- https://forums.ni.com/t5/LabVIEW-Project-Providers/Project-Providers-Documentation/m-p/3492573#M285
    1 point
  26. 1) You don't, since it is not code. It is the function prototype (actually the function pointer declaration) of a function that YOU have to implement. You then pass the name of that function as a parameter to the other function that wants this callback. Whenever that other function thinks it wants to tell YOU something, it calls that callback function with the documented parameters, and YOUR callback function implementation then does something with that data. But your function is called in the context of the other function at SOME time after you called the original function that you passed your callback function to. Are you still with me? If not, don't worry, most people have big trouble with that. If yes, then you might have a chance to actually get this solved eventually. But don't expect to have this working tomorrow or next week. You have a steep learning curve in front of you.
2) The iCube Camera is simply a LabVIEW class that handles the whole camera management in LabVIEW, and in some of its internal methods accesses the DLL interface, and creates the event message queue, and starts up an event handler, and ..., and ..., and ....
3) The RegEventCallback function is a LabVIEW node that you can use to register events on CERTAIN LabVIEW refnums. One of them is .Net refnums, IF the object class behind that refnum implements events. .Net events are the .Net way of doing callbacks. They are similarly complex to understand and implement, but avoid a few of the nastier problems of C callback pointers, such as data type safety. But to use that node you will need a .Net assembly that exposes an object class which supports events of some sort. Since .Net uses typesafe interface descriptions, LabVIEW can determine the parameters that such an event has, automatically create a callback VI, and connect it behind the scenes to the .Net event. It works fairly well but has a few drawbacks that can be painful during development. Once the callback VI has been registered and activated, it is locked inside LabVIEW, and there are only two ways to get this VI back into an editable state: restart LabVIEW, or, after the object classes on which the event occurred have been properly closed (Close Reference node), explicitly call the .Net garbage collector in LabVIEW to make .Net release the proxy caller that LabVIEW created and instantiated to translate between the .Net event and the LabVIEW callback VI. If you have a .Net assembly that exposes events for some of its object classes, it is usually quite a bit easier to interface to from LabVIEW than trying to do callback pointers in a C(++) DLL/shared library. Writing an assembly in C# that implements events is also not really rocket science, but definitely not a beginner's exercise either.
4) If you interface to C(++) in LabVIEW there is no safety net, sturdy floor, soft cushions or a few trampolines to save your ass from being hurt when something doesn't 100% match between what you told LabVIEW the external code expects and what it really does expect. In the best case it's a hard crash with an error message, the next best case is a hard crash with no error message, and after that you are in the lands of maybes, good luck and sh*t storm. A memory corruption does not have to immediately crash your process; it could also simply overwrite your multimillion dollar experiment results without you noticing until a few hours later, when the whole factory starts to blow up because it is operating on wrong data. So be warned, tread safely, and make sure to have your C(++) solution tested by someone who really understands what the potential problems are, or don't ever use your code in a production environment. This is the biting in your ass that dadreamer talked about, and it is not really LabVIEW that did it, but you yourself!
5) Which video screen output are you talking about? Once you've managed to get the camera data into your LabVIEW program without blowing up your lab? Well, you could buy IMAQ Vision, or start another project where you will need to learn a lot of new things to do it yourself. 🙂
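As a minimal illustration of point 1), with generic names not taken from any particular SDK: the header only declares the shape of the callback; you implement a matching function and pass its name when registering.

    /* What the library's header declares: just the function pointer type */
    typedef void (*DataCallback)(int handle, unsigned char *buf, int len, void *user);

    /* Provided by the library: you hand it YOUR function */
    extern void RegisterCallback(DataCallback cb, void *user);

    /* YOUR implementation; the library calls it later, on its own thread */
    static void MyCallback(int handle, unsigned char *buf, int len, void *user)
    {
        /* do something quick with buf here, then return */
    }

    /* in your setup code:  RegisterCallback(MyCallback, NULL); */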
    1 point
  27. There is a reason why so many pleas for support of camera access are out there and no single properly working solution exists except paid toolkits. Actually it's not one reason but a plethora of them:
- cameras use lots of different interfaces
- they often claim to follow a certain standard but usually don't do so to the letter
- there are several dozen different communication standards that a camera manufacturer can (try to) follow
- it involves moving lots of data AFAP, which requires good memory management from the end user application down to the camera interface, through many layers of software, often from different manufacturers
- it costs time, time and even more time to develop
- it is fairly complex, and not many people have the required knowledge to do it beyond a "Look mom, it keeps running without needing to hold its hands (most of the time)"
Callback functions are not really magic, but there is a reason they are only mentioned in the advanced section of all the programming textbooks I know (if they are mentioned at all). Most people tend to have a real hard time wrapping their mind around them. For many it already starts with simple memory pointers, and a callback function is simply a memory pointer on steroids. 😀 And just when you think you have mastered them, you will discover that you haven't really started, as concurrency and multithreading try to throw not only a wrench in your wheels but an entire steam roller.
    1 point
  28. I won't be there this year, and haven't heard anything about an official BBQ. Fingers crossed for a full normal NI Week next year.
    1 point
  29. I have no experience with these cameras or the Hikvision SDK, but some things on your BD caught my eye immediately:
- Looks like the HCNetSDK.dll developer made all the functions use the stdcall calling convention, whereas your CLFNs use the cdecl calling convention.
- You've set the NET_DVR_Login_V30 CLFN to accept only 4 input parameters, but the function wants 5 parameters.
- You've set the NET_DVR_Logout CLFN to accept 4 parameters, but the function needs only 1 parameter.
- In some CLFNs the parameter types don't match the prototypes exactly, e.g. wPort should be U16 (WORD), not U32 (DWORD). Use the Windows Data Types table to find out what the WinAPI types represent.
These are the prototypes for NET_DVR_Login_V30 and NET_DVR_Logout (as written in the Device Network SDK Programming User Manual V4.2):

    LONG NET_DVR_Login_V30(
        char *sDVRIP,
        WORD wDVRPort,
        char *sUserName,
        char *sPassword,
        LPNET_DVR_DEVICEINFO_V30 lpDeviceInfo
    )

    BOOL NET_DVR_Logout(LONG lUserID)
    1 point
  30. It's fine, I moved it to a category I think fits. The Lounge would also work, which is a catch-all. In the future, feel free to use the Report to Moderator function, giving text about what you want to have happen to a thread.
    1 point
  31. With packages you can include files (e.g. an installer) and put them where you want them. You can also call post-install scripts. I think if there is a way to call the installer silently from the CLI, you could script this. You are starting to tread on IT's world though, but sometimes you need to get it done and for it to work, so perhaps you are best off doing it yourself this way. Seriously though, if they have the systems in a domain or something, they might be able to handle the environment setup independently of your NI packages.
    1 point
  32. That's almost like asking if you can install a GM engine in a Toyota. 😀 The answer is yes, you can, if you are able to rework the chassis and make just about a few thousand other modifications. But don't expect support from either of the two if you run into a snag. More seriously, you may also run into license issues.
    1 point
  33. Yes, and acquisitions like DasyLab, MCC and some others clearly fall into the category of buying out the competition. Lookout, Electronics Workbench and HiQ are a bit of a different story. They were bought for know-how and specific market presence and were for some time actively supported and improved by NI. But then NI discovered that they could not compete with the big guys in those markets unless they were willing to invest lots and lots of money. And I don't mean a few millions, but for each of them a really significant chunk of the entire budget NI had for the whole operation. The other problem was that most of the NI sales people had pretty much no idea what these products really were and consequently couldn't sell them very effectively. Their natural instinct was to point at LabVIEW whenever someone came with an inquiry, even if one of these packages would have fit the customer much better.
I think it's unfortunate for each of those three. They were very unique in some ways and would have deserved more active, supportive development by their owner. Electronics Workbench had a dominant role in the educational market by the time NI bought it, but is nowadays nothing more than an anecdote in the history of electronic design and development tools. That's in large part thanks to NI's inactivity and disinterest in it. But if NI hadn't bought it, it probably would have ended up as another product of Autodesk or similar, which would sort of market it but really try to nudge the user with soft force toward their main product instead. And nothing much would have changed. 😀 Lookout wasn't the biggest player in the market by far, but its architecture was very clean and very unique and not encumbered by countless legacy hacks from other SCADA packages that had existed in the market since DOS was their main operating system. HiQ was more like Mathematica than Matlab in many ways, but still different enough to deserve an independent existence. Of those three, only Matlab remains as a surviving product.
Digilent would seem to be yet a different story. I cannot see where they could possibly have been significant competition to NI, nor what NI was really expecting from the acquisition. I think it was acquired more as a pretty unfinished idea to create a stronger educational presence, and then the market analysts came and killed that idea. MCC in the new NI also clearly isn't competition anymore for anything they do. Rather, it could serve as the entity that collects all the remains of old NI and some of the brands that still hold some promise and haven't faltered beyond the possibility of reanimation.
    1 point
  34. You have some serious undefined behaviour in your C code. In create_copy_adress_Uint you dereference an uninitialized pointer, writing to a random location. In get_adress_Uint you return the address of a stack variable, which is invalid as soon as the function returns. You are going to experience lots of crashing.
Have you looked at the configuration options for the Call Library node? You can just pass parameters by pointer. Passing an array by "array data pointer" will let you manipulate the data as in C (but do not try to free that memory). You do not need to make a copy. Be mindful of the lifetime: that pointer is only valid during the function call and might be invalidated later, so don't keep it around after your function returns. If you also want to resize LabVIEW data structures, there are memory manager functions to do that. Pass the array by handle and use DSSetHandleSize or NumericArrayResize. Examples for interfacing with DLLs are here: examples\Connectivity\Libraries and Executables\External Code (DLL) Execution.vi
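To spell out those two bugs (the bodies here are illustrative guesses at the code being described, plus the safe caller-owns-the-buffer alternative):

    #include <stdint.h>

    void create_copy_bad(uint32_t val)
    {
        uint32_t *p;   /* uninitialized */
        *p = val;      /* BUG: writes through a garbage address */
    }

    uint32_t *get_adress_bad(void)
    {
        uint32_t local = 42;
        return &local; /* BUG: dangling the moment the function returns */
    }

    /* Safe: the caller (LabVIEW, via "Pointer to Value") owns the storage,
       and the DLL only writes into it for the duration of the call. */
    void copy_value_ok(uint32_t val, uint32_t *out)
    {
        *out = val;
    }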
    1 point
  35. Ehh, why not... <gets chair and looks intensely at camera> I think that NI will sell in the next 2-3 years. I agree with X on the churn rate. There's zero chance NI comes out on top in the long term with this plan. NXG is dead; LabVIEW as a competitive language is no more from a professional standpoint. It's firmly an enthusiast language now. That means, like other enthusiast languages, its user base will continue to shrink from here on out. Now you've got two options to deal with this problem: embrace it or hasten its demise. NI is obviously going with the latter. 2-3 (maybe 5?) years of increased revenue while people work their way off the LabVIEW bandwagon (which they were going to do anyway when NXG was nuked) and then they are moving on. It's possible NI just understands the 'make hay while the sun is shining' concept and is going to extract every bit of value from the product in the next half decade because, either way, LabVIEW is dead weight on the company in 5-10 years.
The other possibility is that subscription revenue has a higher impact on company value (on paper) than one-off sales. I think subs are a 2-3x multiplier on estimated value. If NI is looking to sell, moving everything to subs and holding for a couple of years until they hit the peak of the revenue curve in 2025, then shopping for a buyer, makes the company look 50-100% more valuable than it was in 2021.
All that's conjecture and theory. I'm more than happy to be proven incorrect, but I believe I am saying the quiet part out loud here, and I think that's a good thing. (I hope)
Best, Tim
    1 point
  36. You are playing with fire. Ownership is key. DO NOT manipulate pointers in LabVIEW, period! You either manipulate data by passing it to a DLL (like an array, where LabVIEW owns the data) or you provide functions to manipulate the data (where the DLL owns the data - and where is your freeing of the pointer allocated inside the DLL?). LabVIEW has no ability to know what a DLL is doing with memory, and vice versa. You must also take into account the pointer size (32-bit LabVIEW or 64-bit LabVIEW). For some types this is handled for you (arrays, for example); for others you will want to use the Unsigned/Signed Pointer-sized Variable (for opaque pointers) and pass that BY VALUE to other functions. Look at the Function Prototype in the dialogue; you will see the C equivalent of the call. Note that you do not seem to be able to do things like int32_t myfunc(&val). Instead you have to use "Pointer to Value" and it will look like int32_t myfunc(int32_t *val). If you are trying to manipulate pointers, you are doing it wrong, and it will crash at some point, taking the whole IDE with it.
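A sketch of that opaque-pointer pattern under the rules above (names invented): the DLL owns the object, LabVIEW only ferries the pointer-sized value between calls, and the DLL frees what it allocated.

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct { int32_t last; } Session;   /* contents opaque to LabVIEW */

    /* CLFN return type: Unsigned Pointer-sized Integer */
    Session *session_create(void) { return calloc(1, sizeof(Session)); }

    /* Pass the session BY VALUE as a pointer-sized integer; val is configured
       as "Pointer to Value", i.e. the int32_t myfunc(int32_t *val) style. */
    int32_t session_read(Session *s, int32_t *val) { *val = s->last; return 0; }

    /* The DLL, not LabVIEW, frees the memory it allocated. */
    void session_destroy(Session *s) { free(s); }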
    1 point
  37. I have validated this library against LibreOffice 7.2 and Windows 10.
    1 point
  38. We are working on it. We just need more funding...
    1 point
  39. Actually, the nipkg.ini is located at the path specified in the KB (\%localappdata%\National Instruments\NI Package Manager\nipkg.ini).
    1 point
  40. Some people would say that that is your problem. Others that it is a bliss. 😀
    1 point
  41. Community Edition does have Application Builder. Are you saying you think they're looking to discontinue LabVIEW? If they aren't going to sell it anymore, it would be great if they'd make it open source. Not sure how likely that is though.
    1 point
  42. Hi Lavans,
I'm working on releasing our Medulla ViPER Dependency Injection Framework to the community as an open source project. ViPER has been a labor of love that I have been working on for close to 8 years. The motivation to develop ViPER was to reduce the cost, time and frustration involved in deploying test systems in highly regulated industries such as medical device manufacturing. The big problem that ViPER solves is that change does not require you to perform a full top-to-bottom verification of the system; only the new or changed component needs to be verified. We used ViPER at Cochlear to test implants and sound processors, and it is the standard architecture used within the enterprise. ViPER was also used to develop a system to parallel-test up to 100 Trophon 2 units simultaneously for Nanosonics, by implementing a Test Server running on an NI Industrial Controller, with HMI Clients implemented on tablets for operators, engineers and admins. Although ViPER is useful for test, it's not just for test systems; you can build any system with ViPER.
ViPER is a plugin architecture. It implements a recursive factory creator that injects pre-built (and verified) components into a system at runtime, as defined by an Object Definition Document. ViPER can build rich and deep object hierarchies, and can even inject into ancestors as well. Components include soft front panels and an attribute and configuration viewer, and are built on the GDS4 class architecture. ViPER systems are also slim and efficient because they do not carry around redundant classes in their builds that may or may not be needed. ViPER includes an Object Editor that allows you to create or edit the Object Definition Document, but it is also a useful engineering tool, allowing you to navigate the object hierarchy and configure and launch Soft Front Panels for any sub-objects. Included is a project template that allows you to create your own ViPER Components.
I presented ViPER at the GLA Summit last year and to the Sydney LabVIEW User Group, and I've posted the video of the presentation on LinkedIn. I'm keen to find a few gurus to have a play with it before I release it.
ViPER: A Dependency Injection Framework for LabVIEW
Cheers, Kurt
    1 point
  43. Does it help to re-ask the question as "where should LabVIEW have a future?" It is not difficult to name a number of capabilities (some already stated here) that are extremely useful to anyone collecting or analyzing data, and that are either unique to LabVIEW or much simpler in it. They're often taken for granted, and we forget how significant they are and how much power they unlock. For example (and others can add more):
- FPGA - much easier than any text-based FPGA programming, and so powerful to have deterministic computational access to the raw data stream
- Machine vision - especially combined with a card like the 1473R, though it's falling behind without CoaXPress
- Units - yes, no one uses them, but they can extend strict programming to validation of correct algorithm implementation
- Parallel and multi-threaded programming - is there any language as simple for constructing parallel code? Not to mention natural array computations
- Real-time programming
- Data-flow - a coherent way of treating data as the main object of interest; fundamental, yet a near-unique programming paradigm with many advantages
- and all integrated into a single programming environment where all the compilation and optimization is handled under the hood (with almost enough ability to tweak that)
Unfortunately NI appears to be backing away from many of these strengths, and other features have been vastly overtaken (image processing has hardly been developed in the last 10 years; GUI design got sidetracked into NXG, unfortunately). But the combination of low-level control in a very high-level language seems far too powerful and useful to have no future at all.
    1 point
  44. Just so everyone is aware of the conclusion of this, and thank you everyone for your help here. After lots of discussion with our NI rep and R&D, it was determined that R&D purposefully did NOT implement any VISA capabilities for the NI PXIe-4080 DMM, not even the ability to enumerate the device. They recommended these two things, neither of which are good options for our architecture or requirements:
1. Use NI's proprietary System Config API to dynamically find the PXIe-4080 DMM. I don't want to transfer my entire framework to this proprietary approach (nor do I believe it would cover all the bases VISA Find does). That's what a standard like VISA is for, which any PXI device should support (at least VISA enumeration/find).
2. Create an INI/INF file using the VISA Wizard (https://www.ni.com/docs/en-US/bundle/ni-visa/page/ni-visa/usingddwtoprogrampxipcidevice.html). However, I don't have access to a Certificate Authority (CA) to make that installable on Win10, nor can I even install the Windows Driver Kit (WDK) on my machine due to IT Security restrictions without particularly difficult approval. NI R&D refused to do the (relatively small) work to create this set of files to fix this oversight.
So at the end of the day, this PXIe device is not VISA capable at all, and they designed it that way. Our project is moving to swap the PXIe-4080 cards we already have to PXI-4070s (which do support VISA enumeration/find/etc.), and future PXI DMM purchases for our setups will likely be Keysight M918xA's, assuming they play nice with NI-VISA in an NI PXI chassis. I wanted to let folks know that this model isn't fully compliant with the PXI standard (although they tried to claim that they meet the letter of the requirements in a particularly lawyerly way, but certainly not the way any NI customer would read it), and I'm a bit concerned this may be the case with future cards - be aware that NI PXI devices might not support VISA anymore.
    0 points