
Leaderboard

Popular Content

Showing content with the highest reputation since 05/15/2022 in all areas

  1. You should be using $[*] or $[0] to indicate Array elements; $.[*] indicates all items in a JSON Object and $.[0] is the Object item named "0". Look at the detailed Help page for JSON Path notation in JSONtext.
    2 points
  2. The SQLite API for LabVIEW had a feature for that. Very easy with a database. I suppose you could do something similar just by saving and loading a particularly named JSON file.
    2 points
  3. Two suggestions: 1) Consider using JSON as your config-data format, rather than clusters. Using JSONtext to manipulate JSON will be faster than using OpenG tools to manipulate clusters. 2) Programmatically get an array of references to all your config-window controls and register a single Event case for Value Change of any one of them. Then use their (hidden) labels to encode what config item they set. For example, your control with the caption "Baud Rate" could have the hidden label "$.Serial Settings.Baud Rate" which is the JSONpath to set in your config JSON (or config clusters).
    2 points
  4. Pointers are pointers. Whether you use DSNewPtr() and DSDisposePtr() or malloc() and free() doesn't matter much, as long as you stay consistent: a pointer allocated with malloc() has to be deallocated with free(), one from DSNewPtr() with DSDisposePtr(), one from HeapAlloc() with HeapFree(), and so on. They may in the end all come from the same heap (likely the Windows heap), but you do not know that, and even if they do, the pointer itself may be, and often is, different, since each memory manager layer adds a little bookkeeping of its own to manage its pointers. To make matters worse, if you resolve to use malloc() and free(), you always have to do the corresponding operations in the same compilation unit. Your DLL may be linked with gcc's C library 6.4 and the calling application with the MS C Runtime 14.0, and while both have a malloc() and a free() function, they absolutely and certainly will not operate on the same heap. Pointers are non-relocatable as far as LabVIEW is concerned, and LabVIEW only uses them for clusters and internal data structures. All variable-sized data on the diagram, such as arrays and strings, is ALWAYS allocated as a handle. A handle is a pointer to a pointer: the first N int32 elements in the data buffer are the dimension sizes (N being the number of dimensions), followed directly by the data, memory-aligned if necessary. Handles can be resized with DSSetHandleSize() or NumericArrayResize(), but the size of the handle does not have to match the size elements in the array that indicate how many elements the array holds. Obviously the handle must always be big enough to hold all the data, but if you change the size element to indicate that the array holds fewer elements than before, you do not necessarily have to resize the handle to that smaller size.
Still, if the change is big you absolutely should resize anyway, but if you reduce the array by a few elements you can forgo the resize call. There is NO way to return pointers from your DLL and have LabVIEW use them as arrays or strings, NONE whatsoever! If you want to return such data to LabVIEW, it has to be in a handle, and that handle has to be allocated, resized, and deallocated with the LabVIEW memory manager functions. No exception, no passing Go and collecting your salary, nada, niente, nothing! If you do it this way, LabVIEW can directly use that handle as an array or string, but of course what you do in C in terms of the data type inside it, and the corresponding size element(s) in front of it, must match exactly. LabVIEW absolutely trusts that a handle is constructed the way it wants it, and makes painstakingly sure to always do it like that itself, so you had better do so too. One specialty in that respect: LabVIEW explicitly allows for a NULL handle, which is equivalent to an "empty" handle with the size elements set to 0. This is for performance reasons; there is little sense in invoking the memory manager to allocate a handle just to record that there is no data to access. So if you pass handle data types from your diagram to your C function, your C function should be prepared to deal with an incoming NULL handle. If you just blindly call DSSetHandleSize() on that handle, it can crash, as LabVIEW may have passed in a NULL handle rather than a valid empty handle. Personally I prefer to use NumericArrayResize() at all times, as it already deals with this specialty properly and also accounts for the actual bytes needed to store the size elements as well as any platform-specific alignment. A 1D array of 10 double values requires 84 bytes on Win32, but 88 bytes on Win64, since under Win64 the array data elements are aligned to their natural size of 8 bytes.
When you use DSSetHandleSize() or DSNewHandle(), you have to account for the int32 size element(s) and the possible alignment yourself. If you use err = NumericArrayResize(fD, 1, (UHandle*)&handle, 10), you simply specify in the first parameter that it is an fD (floatDouble) data type array, that there is 1 dimension, pass the handle by reference, and give the number of array elements it should have. If the array was a NULL handle, the function allocates a new handle of the necessary size. If the handle was a valid handle instead, it resizes it to be big enough to hold the necessary data. You still have to fill in the actual size of the array after you have copied the actual data into it, but at least the complication of calculating how big that handle should be is taken out of your hands. Of course you can also always go the traditional C way: the caller MUST allocate a memory buffer big enough for the callee to work with and pass its pointer down to the callee, which writes something into it, and after return the data is in that buffer. The way that works in LabVIEW is that you MUST make sure to allocate the array or string prior to calling the function. Initialize Array is a good function for that, but you can also use the Minimum Size configuration in the Call Library Node for array and string parameters. LabVIEW allocates a handle, but when you configure the parameter in the Call Library Node as a data pointer, LabVIEW passes the pointer portion of that handle to the DLL. For the duration of that function, LabVIEW guarantees that this pointer stays put in memory and won't be reused anywhere else, moved, deallocated, or anything like that (unless you checked the constant checkbox in the Call Library Node for that parameter, in which case LabVIEW will use that as a hint that it can also pass the handle in parallel to other functions that are likewise marked as not going to modify it).
LabVIEW has no way to prevent you from writing into that pointer in your C function anyway, but that is a clear violation of the contract you yourself set up when configuring the Call Library Node and telling LabVIEW that this parameter is constant. Once the function returns control to the LabVIEW diagram, that handle can get reused, resized, or deallocated at absolutely any time, and you should therefore NEVER EVER hold onto such a pointer beyond the moment you return control back to LabVIEW! That's pretty much it. Simple as that, yet most people get it wrong repeatedly anyway.
    2 points
  5. Sounds like they were too busy updating the NI logo and colors to implement VISA. Oh well.
    1 point
  6. I wanted to have a 2D drawing of my house's layout, so I could have a clear picture of which outlets and lights were on which breakers. I ended up using PowerPoint because it had tons of shapes and was easy to use. I tried a couple of other 2D drawing tools first, but for a one-off I figured PowerPoint was easy enough. I showed it to a friend who was impressed that it was PowerPoint. Here it is, not quite finished. This reminded me of a presentation I saw a couple of years ago about how PowerPoint has lots of unused powerful features. It is an hour long, so maybe skip some in the middle, but experimenting with the morph features, 3D models, fractals, context-aware designs, and using it for full-screen programs are some of the topics. I especially love the library of 3D models.
    1 point
  7. I don't understand the connection. They were running it on a low-power laptop. They were a student. They were (and continue to be) concerned with the climate. They considered (and continue to consider) themselves poor. Not that it matters.
    1 point
  8. Generally, if you use an external library because that library does things LabVIEW can't: go right ahead! If you write an external library to operate on multidimensional arrays and do things to them that LabVIEW has native functions for: you are totally and truly wasting your time. Your compiled C code may in some corner cases be a little faster, especially if you really know what you are doing at the C level, and I do mean REALLY knowing, not just hacking around until something works. So sit back, relax, and think about where you need to pass data to your external Haibal library to do actual work, and where you are simply wasting your time with premature optimization. So far your experiments look fine as a purely educational exercise, but they serve very little purpose in trying to optimize something like interfacing to a massive numerical library like Haibal is supposed to become. What you need to do is design the interfaces between your library and LabVIEW in a way that passes data around cleanly, and that works best by following as few rules as possible, but all of them VERY strictly. You cannot change how LabVIEW memory management works, and neither can you likely change how your external code wants its data buffers allocated and managed. There is almost always some impedance mismatch between those two for all but the most simple libraries. The LabVIEW Call Library Node allows you to support some common C scenarios in the form of data pointers. In addition, it allows you to pass its native data to your C code, which every standard library out there simply has no idea what to do with. Here comes your wrapper shared library interface: it needs to manage this impedance mismatch in a way that is both logical throughout and still performant. Allocating pointers in your C code to pass back and forth across LabVIEW is a possibility, but you want to avoid that as much as possible. Such a pointer is an anachronism in terms of LabVIEW diagram code.
It exposes internals of your library to the LabVIEW diagram, and in that way makes access possible that 99% of your users have no business attempting, nor are they able to understand what they are doing. And no, saying "don't do that" usually only helps with those who are professionals in software development. All the others very quickly believe they know better, and then the reports about your software misbehaving and being a piece of junk start pouring in.
    1 point
  9. I was wondering about that too. But the scrollbars in the image he posted seem to indicate that the VI is actually properly inserted, and the Insert VI method doesn't seem to return an error either. With the limited information that he tends to give and the limited LabVIEW knowledge he seems to have, it is all very difficult to debug remotely, though. And it is not really my job to do so. Edit: I'll be damned! A VI inserted into a Subpanel does not have a window handle at all. I thought I had tested that, but apparently I somehow got misled. LabVIEW seems to handle it all internally without using any Windows support. So back to the drawing board, to make that not a Subpanel window but instead use real Windows child window functionality. I don't like to use the main VI's front panel as the drawing canvas, as the library would draw all over the front panel and fight LabVIEW's control and indicator redraws. As to the NET_DVR_GetErrorMessage() call: I overlooked that one. Good catch, and totally unexpected! It seems that the GetLastError() call is redundant when calling this function, as GetErrorMessage() is not just a function to translate an error code but really a full replacement for GetLastError(). Highly unusual, to say the least, but that's what you get for not reading the documentation to the last letter. 😆 It's hard to debug such software without having any hardware to test with, so the whole library that I posted is in fact a dry exercise that has never run in any way, as there is nothing it can really run with on my system. Same for the callback code. I tested that it compiles (with my old but trusted VS2005 installation), but I cannot test that it runs properly. Well, I could, but that would require writing even more C code to create a test harness that simulates the Hikvision SDK functionality.
I like to tinker with these kinds of problems, but everything has its limits when it is just a hack job in my free time. 😀 Attached is a revisited version of the library with the error VI fixed; it does not use a Subpanel for now but simply lets Empty.vi stand on its own for the moment. Quick and dirty, but we can worry about getting it properly embedded in the main VI after it has proven to work like this. HKNetSDK Interface.zip
    1 point
  10. Looks like The Pirate Bay is going to become NI support.
    1 point
  11. You need to understand what managed code means. In .NET that is a very clear and well-defined term with huge implications. LabVIEW is a fully managed environment too, and all the same basic rules apply. C, on the other hand, is completely unmanaged. Who owns a pointer, who is allowed to do anything with it (even reading from it), and when, is completely up to contracts that each API designer defines himself. And if you as the caller don't adhere to that contract to the letter, no matter how brain-damaged or undocumented it is, you are in VERY DEEEEEEEP trouble. LabVIEW (and .NET, and (D)COM, and others like them) all have a very well-defined management contract. Well defined doesn't necessarily mean that it is simple to understand, or that there are lengthy documents detailing everything about it; not even .NET has exhaustive documentation. Much of it is based on a few basic rules and a set of APIs whose use guarantees that the management of memory objects is fully consistent and protected throughout the lifetime of each of those objects. Mixing and matching those ideas between environments is a guaranteed recipe for disaster. So is not understanding them as you pass around data! For other platforms such as Linux and macOS, certain management rules also exist, and they are typically specific to the API or group of APIs used. For instance, it makes a huge difference whether you use the old (and mostly deprecated) Carbon APIs or the modern Cocoa APIs. They share some common concepts, and some of their data types are even transferable between the two without invoking costly environmental conversions, but that is where the common base stops. Linux is, true to its heritage, a collection of very different ideas and concepts; each API tends to follow its own specific rules. Much of it is very logical once you understand the principles of safe, managed memory.
Until then it all looks like incomprehensible magic, and you are much better off staying away from trying to optimize memory copies and similar things to squeeze out a little more performance. One of the strengths of LabVIEW is that it is very difficult to write code that crashes your program. That is, until you venture into accessing external code. Once you do that, your program is VERY likely to crash, randomly or not so randomly, unless you fully understand all the implications and intricacies of working that way. The pointer from a LabVIEW array or string, passed to the Call Library Node, is only guaranteed to exist for the time your function runs. Once your function returns control back to LabVIEW, it reserves the right to reallocate, resize, delete, or reuse that memory buffer for anything it deems necessary. This part is VERY important, because it allows LabVIEW to optimize memory copies of large buffers. If you want a buffer that you control yourself, you have to allocate it yourself explicitly and pass its reference around to wherever it is needed. But do not expect LabVIEW to deallocate it for you: as far as LabVIEW is concerned, it does not know that this variable is a memory buffer, nor when it is no longer needed, nor which heap management routines it should use to properly deallocate it. And don't expect LabVIEW to be able to directly dereference the data in that buffer, to display it in a graph for instance. As far as LabVIEW is concerned, that buffer is simply a scalar integer, nothing more than a magic number that could mean how many kilometers away the moon is, how many seconds exist in the universe's life, how many atoms fit in a cup of tea, or anything else you fancy. Alternatively, you pass the native LabVIEW handle into the Call Library Node and use the LabVIEW memory manager functions if you have to resize or deallocate it. That way you can use LabVIEW buffers and adhere to the LabVIEW management contract.
But it means that that part of your external code can only run when called from LabVIEW. Other environments do not know about these memory management functions and consequently cannot provide compatible memory buffers to pass into your functions. And definitely don't ever store such handles somewhere in your external code to access them asynchronously from elsewhere once your function has returned control to LabVIEW. That handle is only guaranteed to exist for the duration of your function call, as mentioned above. LabVIEW remains in control of it and will do with it whatever it pleases once you return control from your function call to the LabVIEW diagram. It could reuse it for something entirely different, and your asynchronous access will destroy its contents, or it could simply deallocate it, and your asynchronous access will reach into nirvana and send your LabVIEW process into "An Access Violation has occurred in your program. Save any data you may need and restart the program! Do it now, don't wait and don't linger, your computer may blow up otherwise!" 😀 And yes, one more piece of advice: once you start to deal with external code anywhere and in any way, don't come here or to the NI forum and ask why your program crashes or starts to behave very strangely, and whether there is a known LabVIEW bug causing it. Chances are about 99.25678% that the reason for that behaviour is your external code or the interface you created for it with Call Library Nodes. If your external code tries to be fancy and deals with memory buffers, that chance increases by several magnitudes. So be warned! In that case you are doing something fundamentally wrong. Python is notoriously slow, due to its interpreted nature and its concept of everything being an object. There are no native arrays; an array is represented as a list of objects.
To get around that, numpy uses wrapper objects around externally managed memory buffers, which allow a consecutive representation of an array in one single memory object and fast indexing into it. That allows numpy routines to be relatively fast when operating on arrays; without it, any array-like manipulation tends to be dog slow. LabVIEW is fully compiled and uses many optimizations that let it beat Python performance with its hands tied behind its back. If your code runs so much slower in LabVIEW, you have obviously done something wrong, and not just tied its hands behind its back but gagged and hogtied it too. Things that can cause this are, for instance, Build Array nodes inside large loops, if we talk about LabVIEW diagram code, or bad external-code memory management, if you pass large arrays between LabVIEW and your external code. The experiments you show in your post may be interesting exercises, but they definitely go astray in trying to solve such issues.
    1 point
  12. Just so everyone is aware of the conclusion of this, and thank you everyone for your help here. After lots of discussion with our NI rep and R&D, it was determined that R&D purposefully did NOT implement any VISA capabilities for the NI PXIe-4080 DMM, not even the ability to enumerate the device. They recommended these two things, neither of which is a good option for our architecture or requirements: 1) Use NI's proprietary System Config API to dynamically find the PXIe-4080 DMM. I don't want to transfer my entire framework to this proprietary approach (nor do I believe it would cover all the bases VISA Find does). That's what a standard like VISA is for, which any PXI device should support (at least VISA enumeration/find). 2) Create an INI/INF file using the VISA Wizard (https://www.ni.com/docs/en-US/bundle/ni-visa/page/ni-visa/usingddwtoprogrampxipcidevice.html); however, I don't have access to a Certificate Authority (CA) to make that installable on Win10, nor can I even install the Windows Driver Kit (WDK) on my machine due to IT security restrictions without particularly difficult approval. NI R&D refused to do the (relatively small) work to create this set of files to fix the oversight. So at the end of the day, this PXIe device is not VISA capable at all, and they designed it that way. Our project is moving to swap the PXIe-4080 cards we already have for PXI-4070s (which do support VISA enumeration/find/etc.), and future PXI DMM purchases for our setups will likely be Keysight M918xA's, assuming they play nice with NI-VISA in an NI PXI chassis. I wanted to let folks know that this model isn't fully compliant with the PXI standard (although they tried to claim that they meet the letter of the requirements in a particularly lawyerly way, certainly not the way any NI customer would read it), and I'm a bit concerned this may be the case with future cards: be aware that NI PXI devices might not support VISA anymore.
    0 points
