Leaderboard
Popular Content
Showing content with the highest reputation since 05/08/2025 in Posts
-
So a couple of years ago I was reading the ZLIB documentation on compression and how it works. It was an interesting blog post going into how it works and what compression algorithms like zip really do, using LZ77 and Huffman tables. It was very educational, and I thought it might be fun to try to write some of it in G. The deflate function in ZLIB is very well understood from an external code call, and so the only place a native implementation ever so slightly made sense in my head was on LabVIEW RT. The wonderful OpenG Zip package has support for Linux RT in version 4.2.0b1 as posted here. For now this is the version I will be sticking with because of the RT support.

Still, I went on my little journey trying to make my own in pure LabVIEW to see what I could do. My first attempt failed immensely, and I did not have the knowledge to understand what was wrong or how to debug it. As a test of AI progression I decided to dig up this old code and start asking AI what I could do to improve my code, and to finally have it working properly. Well, over the holiday break Google Gemini delivered. It was very helpful for the first 90% or so. It was great having a back-and-forth dialog asking about edge cases and how things are handled. It gave examples and knew what the next steps were. Admittedly it is a somewhat academic problem, so maybe that's why the AI did so well. And I did still reference some of the other content online. The last 10% were a bit of a pain: the AI hallucinated several times, giving wrong information or analyzing my byte streams incorrectly. But this did help me understand it even more, since I had to debug it.

So attached is my first go at it, in 2022 Q3. It requires some packages from VIPM.IO: Image Manipulation, for making some debug tree drawings (which is actually disabled at the moment), and the new version of my Array package, 3.1.3.23.

So how is performance? Well, I only have the deflate function, and it only implements the dynamic Huffman table, which only gets used if there is some amount of data, around 1K and larger. I tested it with random stuff with lots of repetition, and my 700k string took about 100 ms to process while the OpenG method took about 2 ms. Compression was similar, but OpenG's output was about 5% smaller too. It was a lot of fun, I learned a lot, and will probably apply things I learned, but realistically I will stick with OpenG for real work. If there are improvements to make, the largest time sink is in detecting the patterns. It is a 32k sliding window, and I'm unsure what techniques can be used to make it faster.
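On making the 32k window search faster: the usual trick (and what zlib itself does) is to hash the next few bytes and keep chains of earlier positions that produced the same hash, so you only compare a handful of candidates instead of scanning the whole window. A minimal Python sketch of the idea, not taken from the attached G code:

```python
def find_match(data, pos, head, prev, min_len=3, window=32 * 1024):
    """Return (length, distance) of the best match for data[pos:].
    Only positions sharing the same 3-byte hash are compared
    (real implementations also cap the chain walk; omitted here)."""
    if pos + min_len > len(data):
        return 0, 0
    key = bytes(data[pos:pos + min_len])
    best_len, best_dist = 0, 0
    cand = head.get(key, -1)
    while cand >= 0 and pos - cand <= window:
        n = 0
        while pos + n < len(data) and data[cand + n] == data[pos + n]:
            n += 1
        if n > best_len:
            best_len, best_dist = n, pos - cand
        cand = prev.get(cand, -1)
    prev[pos] = head.get(key, -1)   # push this position onto its chain
    head[key] = pos
    return best_len, best_dist

head, prev = {}, {}
data = b"abcabcabcabc"
print(find_match(data, 0, head, prev))   # (0, 0): nothing seen yet
print(find_match(data, 3, head, prev))   # (9, 3): overlapping match, 3 back
```

ZLIB G Compression.zip
5 points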
-
Phew, that is a pretty strong opinion! Although I personally am not a fan of the overall style of DQMH, none of my problems are with the scripting/wizards or placeholder text. I think any framework that tries to do "a lot" will be complicated... your own personal framework (which you likely find trivial to use) is likely to be a bit weird to others. DQMH is extremely popular for a reason... To paraphrase the words of a wiser person than I, "please don't yuck someone else's yum"
3 points
-
It is not that LabVIEW MAY unregister the reference, but that it WILL unregister the reference as soon as the top-level VI in whose hierarchy the reference was created goes idle. This is by design, and the only way to prevent it is to either keep that hierarchy active until every other user of that refnum has finished, or delegate creation of the refnum to the place where it is needed, for instance through an LV2-style global maintaining the reference in a shift register: when called for the first time, it creates the refnum if the shift register contains an invalid refnum. True Actor Framework design kind of mandates that all refnums are created in the context where they are used, not in some other global instance that may or may not keep running for the time some Actor is using the refnum.
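As a rough text-language analogy to that LV2-style global (the resource and names here are hypothetical, just to show the shape of the pattern):

```python
import threading

_lock = threading.Lock()
_refnum = None   # plays the role of the shift register in the LV2 global

def get_refnum():
    """Create the shared resource on first call, reuse it afterwards.
    Mirrors the pattern above: an 'invalid refnum' in the shift
    register triggers creation right where the refnum is needed."""
    global _refnum
    with _lock:
        if _refnum is None:                    # "is the refnum invalid?"
            _refnum = open("shared.log", "a")  # hypothetical resource
        return _refnum
```

2 points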
-
Hi everyone, just want to share our open source project "Labview Python Bridge". Connect LabVIEW apps with Python apps in realtime with multi-processing data queues. https://github.com/jmor2000/labview_python_bridge If anyone has any questions or suggestions for new developments / features, let me know. Cheers, Jeff
2 points
-
Are you seriously expecting anyone to install a random executable on their system from an unknown publisher, provided by an anonymous person on the web, where one can't even get a proper link in Google to the actual company page? Sorry, but anyone doing that should not be allowed within 5 m of a computer system!
2 points
-
Absolutely echo what Shaun says. Nobody banned them. But most who tried to use them have, after a more or less short time, run from them, with many hairs ripped out of their heads, a few nervous tics from too much caffeine consumption, and swearing to never try them again. The idea is not really bad, and if you are willing to suffer through it you can make pretty impressive things with them, but the execution of that idea is anything but ideal and feels in many places like a half-thought-out idea that was eventually abandoned when it was kind of working but before it was a really easily usable feature.
2 points
-
Seems like this one has "escaped everyone's grasp" too: ParallelLoop.ShowAllSchedules=True. That's because it was only checked from the password-protected diagram of ParallelForLoopDialog.vi (LabVIEW 20xx\resource\dialog). Present since LabVIEW 2010. When activated, it allows you to apply a more advanced iteration partitioning schedule. In other words, instead of this you will get this. Could this be useful? I can't say. Maybe in some very specific use cases. In my quick tests I didn't manage to get an increase in productivity. It's easy to mess up with those options and make things worse than the default. It can also be changed by this scripting counterpart.
2 points
-
Look at this new download on VIPM: https://www.vipm.io/package/bjm_lib_request_power/
2 points
-
You want an ability to override the Equality or Comparison operators? I'm unsure whether it really existed in OpenG packages, but now you have those neat malleable VIs that let you do that: Search Unsorted 1D Array, Sort 1D Array, Search Sorted 1D Array. They have an additional input to specify your own equals or less function in the form of a custom comparison class or a VI refnum. There's an article to help: Creating a Custom Sorting Function in LabVIEW
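For readers coming from text languages, the same idea (passing your own "less" function into a sort) looks like this in Python — an analogy, not the LabVIEW API:

```python
from functools import cmp_to_key

# Hypothetical records: sort by the numeric field, descending.
data = [("a", 3), ("b", 1), ("c", 2)]

def less(x, y):
    # Plays the role of the 'less' VI/class wired into Sort 1D Array.
    return -1 if x[1] > y[1] else (1 if x[1] < y[1] else 0)

print(sorted(data, key=cmp_to_key(less)))
# [('a', 3), ('c', 2), ('b', 1)]
```

2 points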
-
This is exactly what was said in that ancient thread: Tree control in labview. So if you add 65536*N to the Item Symbols property of the Listbox and have the "Enable Indentation" option activated, you shift the symbol/glyph and the text N levels to the right. Could be useful for simple 'parent-child' relationships if you don't want to use a Tree. It's still used in the Find Examples / NI Example Finder window.
2 points
-
I once went for an interview where they gave me a coding test and asked me to modify it. It was a very long time ago, so I don't remember the exact modification they wanted (nothing to do with memory leaks), but I do remember the obtain queue and read queue inside a while loop with the release queue outside. I asked if they wanted me to also fix the memory leak as well as the modifications, and they were a little puzzled until I explained what you have just said. I must have seen (and fixed) this while-loop bug pattern a thousand times since then in various code bases. I also created this VI which I generally use instead of the primitives, as it initialises on first call, can be called from anywhere, and prevents most foot-shooting by rolling them all into a single VI and ensuring all references but 1 are closed after use. Queue.vi
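To see why obtain-inside-the-loop leaks, here's a toy Python model of LabVIEW's named-queue reference counting — purely illustrative, not LabVIEW's actual implementation:

```python
# name -> (queue, refcount); the queue is freed only when every
# obtained reference has been released.
_queues = {}

def obtain_queue(name):
    q, refs = _queues.get(name, ([], 0))
    _queues[name] = (q, refs + 1)     # every Obtain adds a reference
    return q

def release_queue(name):
    q, refs = _queues[name]
    if refs <= 1:
        del _queues[name]             # last reference: queue destroyed
    else:
        _queues[name] = (q, refs - 1)

# The bug pattern: Obtain inside the loop, one Release outside.
for _ in range(1000):
    q = obtain_queue("data")          # refcount grows every iteration...
release_queue("data")                 # ...but is decremented only once
print(_queues["data"][1])             # 999 references leaked
```

2 points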
-
In the past I have used the IMAQ drivers for getting the image, which on their own do not require any additional runtime license. It is one of those lesser-known secrets that acquiring and saving the image is free, but any of the useful tools have a development and deployment license associated with them. I've also had mild success with leveraging VLC. Here is the library I used in the past, and here is another one I haven't used but looks promising. With these you can have a live stream of a camera as long as VLC can talk to it, and then pretty easily save snapshots.
EDIT: The NI software for getting images through IMAQ for free is called "NI Vision Common Resources". This LAVA thread is where I first learned about it.
2 points
-
You can 'renew' your LabVIEW CE license this way if you haven't tried it:
Go to https://www.ni.com
Hover over your user icon in the upper right and select "My Account".
Scroll down to "Products and Services" and select "View my products".
Scroll down to find your LabVIEW Community Edition and select "Renew" from the drop-down menu to the right.
Apologies if you've already tried this and still had issues. Maybe someone else will find it useful.
1 point
-
Reentrant execution may be a safe option. Have to check the function. The zlib library is generally written in a way that should be multithreading safe. Of course, that does NOT apply to accessing, for instance, the same ZIP or UNZIP stream with two different function calls at the same time. The underlying streams (mapping to the corresponding refnums in the VI library) are not protected with mutexes or anything. That's extra overhead that costs time even when it is not necessary. But for the Inflate and Deflate functions it would be almost certainly safe to do.

I'm not a fan of making libraries reentrant all over, since in older versions they were not debuggable at all, and there are still limitations even now. Also, reentrant execution is NOT a panacea that solves everything. It can speed up certain operations if used properly, but it comes with significant overhead for memory and extra management work, so in many cases it improves nothing and can even have negative effects. Because of that, I never enable reentrant execution in VIs by default, only after I'm positively convinced that it improves things.

For the other ZLIB functions operating on refnums I will for sure not enable it. It would work fine if you make sure that a refnum is never accessed from two different places at the same time, but that is active user restraint. Simply leaving the functions non-reentrant is the only safe option without having to write a 50-page document explaining what you should never do, which 99% of users will never read anyway. 😁

And yes, LabVIEW 8.6 has no Separate Compiled Code. Neither does 2009.
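The "one stream per user" rule described above holds in any zlib binding. For instance, in Python, giving each thread its own stream object needs no locking at all — a sketch, unrelated to the VI library itself:

```python
import threading
import zlib

def worker(payload: bytes) -> None:
    # Each thread owns its own stream; nothing is shared, so no
    # mutex is needed -- the same rule as one stream refnum per user.
    c = zlib.compressobj()
    out = c.compress(payload) + c.flush()
    print(threading.current_thread().name, len(payload), "->", len(out))

threads = [threading.Thread(target=worker, args=(b"x" * 100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

1 point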
-
A Timestamp is a 128-bit fixed-point number. It consists of a 64-bit signed integer representing the seconds since January 1, 1904 GMT and a 64-bit unsigned integer representing the fractional seconds. As such it has a range of something like ±3*10^11 years relative to 1904. That's about ±300 billion years, about 20 times the lifetime of our universe and long after our universe will have either died or collapsed. And the resolution is 2^-64 seconds, about 5*10^-20 s — a fraction of an attosecond. However, LabVIEW only uses the most significant 32 bits of the fractional part, so it is "only" able to have a theoretical resolution of 2^-32 seconds, some 200 picoseconds.

Practically, the Windows clock has a theoretical resolution of 100 ns. That doesn't mean you can get incremental values that increase by 100 ns, however. It's how the timebase is calculated, but there can be bigger increments than 100 ns between two subsequent readings (and no increment at all).

A double floating-point number has an 11-bit exponent and 52 fractional bits. This means it can represent about 2^53 seconds, or some 285 million years, before its resolution gets coarser than one second. Scale down accordingly to 285,000 years for 1 ms resolution and still 285 years for 1 µs resolution.
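A quick sketch of what those 128 bits mean, decoding a flattened timestamp in Python — assuming the standard big-endian i64-seconds + u64-fraction layout:

```python
import struct
from datetime import datetime, timedelta, timezone

def decode_lv_timestamp(raw: bytes) -> datetime:
    """Decode a flattened LabVIEW timestamp: big-endian i64 seconds
    since 1904-01-01 UTC, then u64 fractional seconds.
    (timedelta rounds to microseconds; fine for illustration.)"""
    secs, frac = struct.unpack(">qQ", raw)
    epoch = datetime(1904, 1, 1, tzinfo=timezone.utc)
    return epoch + timedelta(seconds=secs + frac / 2**64)

print(2**-64)   # ~5.4e-20 s: full fractional resolution
print(2**-32)   # ~2.3e-10 s: the 32 bits LabVIEW actually uses (~230 ps)
```

1 point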
-
Well, I referred to the VI names really: the ZLIB Inflate VI calls the uncompress function, which then internally calls the inflate_init, inflate and inflate_end functions, and the ZLIB Deflate VI calls the compress function, which accordingly calls deflate_init, deflate and deflate_end. The init, add, end functions are only useful if you want to process a single stream in chunks. It's still only one stream, but instead of entering the whole compressed or uncompressed stream as a whole, you initialize a compression or decompression reference, then add the input stream in smaller chunks and get the corresponding output stream each time. This is useful to process large streams in smaller chunks to save memory at the cost of some processing speed. A stream is simply a bunch of bytes. There is no inherent structure in it; you would have to add that yourself by partitioning the chunks accordingly.
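Python's zlib bindings expose the same init/add/end pattern, which makes the chunked flow easy to see — a sketch, unrelated to the LabVIEW VI library:

```python
import zlib

def deflate_in_chunks(chunks):
    """compressobj() ~ init, compress() ~ add, flush() ~ end:
    one stream, fed piecewise to bound memory use."""
    c = zlib.compressobj()
    out = bytearray()
    for chunk in chunks:
        out += c.compress(chunk)   # may return b"" until buffers fill
    out += c.flush()               # finish the single stream
    return bytes(out)

parts = [b"hello " * 1000, b"world " * 1000]
packed = deflate_in_chunks(parts)
assert zlib.decompress(packed) == b"".join(parts)
```

1 point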
-
You could also check https://github.com/ISISSynchGroup/mjpeg-reader which provides a .Net solution (not tried). So, who volunteers for something working on Linux?
1 point
-
The thing I loved about the original LabVIEW was that it was not namespaced or partitioned. You could run an executable and share variables without having to use things like memory maps. I used to have a toolbox of executables (DVM, power supplies, oscilloscopes, logging, etc.) and each test system was just launching the appropriate executable[s] at the appropriate times. It was like OOP composition for an entire test system, but with executable modules. Additionally, crashes were unheard of. In the 1990s I think I had 1 insane object in 18 months and didn't know what a GPF fault was until I started looking at other languages. We could run out of memory if we weren't careful though (remember the Bulldozer?). Progress!
1 point
-
I haven't had much time to investigate this until this month, but I think I've found the cause. XNodes on the production computer were not designed optimally. In the AdaptToInputs ability I was unconditionally passing a GenerateCode reply, thinking that AdaptToInputs is only called when interacting with the XNode (connecting/disconnecting wires). It turned out that LabVIEW also calls the AdaptToInputs ability once when the VIs are loaded and any single change is made, no matter whether it touches the XNode or not. As I had many such non-optimal XNodes in many places, it was causing code regeneration in all of them. Besides that, some of my VIs had very high code complexity (11 to 13) because of a bunch of nested structures. When the XNode regeneration was occurring simultaneously with the VI recompilation, it was taking a minute or so. After I added extra conditions to my AdaptToInputs ability (issue a GenerateCode reply only when the Term Types are changed), the edits in my VIs started to take 1.5 seconds. Still, the hierarchy saves can be slow when some 'heavy' VIs are changed, but it's a task for me to refactor those VIs so their complexity decreases to 10 or less. By the way, my example from the previous page was not suitable for demonstrating the situation, as its code complexity is low and the Match Regular Expression XNode does not issue a GenerateCode reply in AdaptToInputs.
1 point
-
You might have more success posting this on the Discord. Most of the conversations happen there these days.
1 point
-
Hi
My advice for managing multiple versions of LabVIEW is always the same:
>>> Install only one LabVIEW version per partition if you also need to install any driver, toolkit or module, or need other software that integrates with LabVIEW in some way. No exceptions.
I do have VMWare installed with Windows XP to be able to open ancient LabVIEW versions like 6.1 or read the old CHM help files, accepting the sluggish performance of the VM environment. I avoid using it for anything 'serious'.
To manage the span between LabVIEW 2018 and 2024 I would divide the disk into two partitions, install two copies of Windows, and then install LabVIEW. To manage multiple partitions and select which to boot from by default, I recommend installing EasyBCD. But you don't have to; Windows creates a simple multiboot menu itself.
There are other options too, but they require some dedication going into the art of multiboot management.
¤ You can install Windows on an external USB3-connected disk, SSD or flash disk. Microsoft abandoned the concept in 2020, but a program called Rufus revived it, and now there are many tools that give this as an option. Works splendidly even with Windows 11.
¤ Some laptops (and desktops, of course) support easy change of the disk, sometimes using a replaceable disk cradle instead of the DVD drive.
Good luck
1 point
-
There is an Application property called Display->All Monitors. It will give you the pixel ranges of the monitors in your system. What I've done is use the calling VI's position to figure out which monitor it was on and then place the new VI window as needed. You could use a Win32 DLL call to get the mouse position as well, if that better meets your requirements.
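For the mouse-position route, the Win32 call in question is GetCursorPos; a minimal ctypes sketch (Windows only):

```python
import ctypes
from ctypes import wintypes

pt = wintypes.POINT()
ctypes.windll.user32.GetCursorPos(ctypes.byref(pt))
# Compare (pt.x, pt.y) against the Display->All Monitors pixel
# ranges to decide which monitor the mouse is currently on.
print(pt.x, pt.y)
```

1 point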
-
Yup. There is: MMAP (1.0.1).
1 point
-
Drat, and now my typos and errors are set in stone for eternity (well, at least until LavaG is eventually shut down when the last person on earth turns off the light) 😁
1 point
-
@Rolf Kalbermatter the admins removed that setting for you as everything you say should be written down and never deleted 🙂
1 point
-
I kind of liked this idea and wished VIMs could allow for such backpropagation. I even had a thought of making an idea on the dark forums. But then I played a while with the Variant To Data node. It doesn't play well: it can't determine a sink if a polymorphic VI is connected, or even when a LV native (yellow) node is connected. Borders of structures are another issue, obviously. So it'd require making at least two ideas: to implement VIM backpropagation and to enhance the Variant To Data node. (As a hack one could eliminate the Variant To Data in their code with the coerceFromVariant=TRUE token, but then the diagram starts to look odd and no error handling is performed.)

If someone still wants the code shown in the very first post, it's here: https://code.google.com/archive/p/party-licht-steuerung/source/default/source?page=3 (\trunk\PLS-Code\PLS Main.vi). And these are the papers to progress through the lessons: LabVIEW Intermediate I Successful Development Practices Course Manual. Nothing interesting there for an experienced LV'er though.

XNodes demonstrated here work way better and could be a good alternative (if you're OK with unsupported features, of course). As I tried to adapt them for my own purposes, I decided to improve the sink search technique. It surprised me a bit that there's still no complete code to walk through all the nested structures to determine a source/sink by its wire. Maybe I didn't search well, but all I found was this popup plugin: Find Wire Source.llb. It stops on Case structures though. I have reversed its logic to search for a sink instead of a source and tried to apply recursion when it encounters a Case structure. Well, it's still not ideal, but now it works in most of my cases. There are some cases when it cannot find a sink, e.g. wire branches with void terms: too many scenarios to process them all. Nevertheless, this little VI might be useful for someone. You may use it as a popup plugin, of course, or may pull out that Execute Find Wire Destination (R).vi and use it in your XNodes. As an example: Find Wire Destination.llb

I already tried such nodes in a work project. I must admit that backpropagation is not suitable all the time, so about 50/50. But when it is used, it works.
1 point
-
I have experienced the same thing when my VI was a member of a large class. I removed the VI from the class, set the splitter positions, and then added it back to the class. :shrug:
1 point
-
Just to share how I got around this: by deleting 1 front panel item at a time, I found that one single control was causing PaneRelief to crash: an XY graph. Setting it temporarily to not scale and replacing it with a standard XY graph (the one I had had some colours set to transparent, etc.) was enough to avoid having PaneRelief crash LabVIEW, but it would now just present a timeout error. I found a way around this too though: the VI in question was a member of a DQMH lvlib that probably added a lot of complexity for PaneRelief. With a copy saved as a non-member it worked: I could replace the graph, edit the splitters with PaneRelief without the timeout error (even setting the size to 0), then copy back the original graph replacing the temporary one, and finally move the copy back into the lvlib and swap it with the original. Voila! What a Relief... 😉 I probably have to repeat this whole ordeal if I ever need to readjust the splitters in that VI with PaneRelief though 😮
1 point
-
MAT files are now just HDF5 files. Look at the library https://h5labview.sourceforge.io/ and find the example for writing a MAT file. You just need to add a special header at the beginning. I assume the DLLs needed will work on Windows Server, but am not sure.
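For context, the same trick in Python with h5py: write the HDF5 data into a file that reserves a 512-byte user block, then fill that block with a MAT-style text header. The exact header bytes here are my assumption from the documented Level 5 MAT header (116 bytes of text, 8 reserved bytes, version 0x0200, "IM" endian tag) — verify against a real v7.3 file before relying on it:

```python
import h5py
import numpy as np

# HDF5 file with a 512-byte user block in front of the HDF5 data.
with h5py.File("data.mat", "w", userblock_size=512) as f:
    f["x"] = np.arange(10.0)

# Overwrite the user block with the MAT text header.
header = b"MATLAB 7.3 MAT-file".ljust(116) + b"\x00" * 8
header += (0x0200).to_bytes(2, "little") + b"IM"
with open("data.mat", "r+b") as f:
    f.write(header)
```

1 point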
-
Here is a VI that gets the title of the window that is active. You could then continually loop until the title you expect is active, then perform operations. https://forums.ni.com/t5/LabVIEW/Get-Current-Active-Window/m-p/3930389#M1116926
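If you'd rather skip the VI, the underlying Win32 pair is GetForegroundWindow + GetWindowText; a minimal ctypes sketch (Windows only):

```python
import ctypes

user32 = ctypes.windll.user32
hwnd = user32.GetForegroundWindow()            # handle of the active window
buf = ctypes.create_unicode_buffer(256)
user32.GetWindowTextW(hwnd, buf, 256)          # copy its title into buf
print(buf.value)   # poll until this matches the title you expect
```

1 point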
-
Here is a quick and dirty edit. It allows column separators to be moved, but I noticed that on resize it will set the column widths. So this means if you manually move the columns and then resize the control, it may change the columns in an unexpected way. But at that point you can manually move the separators again. I only have 2017 and 2018, so this is for 2017 and newer for now.
Variant_Probe-2.4.3-0.ogp
1 point
-
Well that's okay, I felt like doing some improvements on the image manipulation code. Attached is an improved version that supports ico and tif files and allows you to select an image from within the file. For ico files it basically grabs the one image you select (with Image Index), makes an array of bytes that is an ico file with only that image in it, and then displays it in the picture box. For tif files there is a .Net method for selecting the image, which for some reason doesn't work on ico files. Edit: Updated to work with tifs as well.
Image Manipulation With Ico and Tif.zip
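For the curious, that ico repackaging amounts to copying one directory entry plus its image data and fixing the offset. A hedged Python sketch of the byte-array surgery, based on the documented ICONDIR layout rather than the attached G code:

```python
import struct

def extract_single_ico(ico: bytes, index: int) -> bytes:
    """Build a standalone .ico containing only image `index`:
    a new ICONDIR with count=1, the chosen 16-byte directory entry
    (with its offset rewritten), then that entry's image data."""
    count = struct.unpack_from("<H", ico, 4)[0]
    assert 0 <= index < count
    entry = bytearray(ico[6 + 16 * index : 6 + 16 * (index + 1)])
    size, offset = struct.unpack_from("<II", entry, 8)
    image = ico[offset : offset + size]
    struct.pack_into("<II", entry, 8, size, 22)  # data now starts at 6 + 16
    return struct.pack("<HHH", 0, 1, 1) + bytes(entry) + image
```

1 point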
-
It's easy: there is probably a VI with that name in memory, so if you were to remove the class prefix there would be a conflict. Rename the VI first to something unique and then try to delete it.
1 point
-
I used scripting and low-level VI editing to generate a VI with every single decoration object in LabVIEW, at least those with IDs 0 to -4096. There may be some out of that range (and many in that range don't have a valid image associated with them), but this range contains a lot of them.
0 to -4096.vi
1 point
-
Looks like someone beat me to it! Oh well, I already exported it (also for 2009, incidentally) so I'll post it here in case it'd be more convenient to use a regular VI file.
0 to -4096.vi
1 point
-
I think I have this fixed. I tried a compiled version of the transport library; this worked without issue. I made the timeout changes, which did not seem to have an impact on performance. I then recompiled everything (the server-side code is used in multiple applications on the system); the delay I was seeing with the one message/response went down to expected amounts in tests. I've been waiting to test this with the whole system up and going; unfortunately, we've been battling drive issues that are stopping everything else. Can't definitively say it's fixed. Can't point to a smoking gun. I'm appreciating this forum and the people on it right now.
1 point
-
Basically you need 2 more Property Nodes if you want to keep your headers' color. You must do what QueueYueue said first. Then:
Active Cell.Active Column Number = -2 (this selects all columns)
Active Item.Row Number = -1 (this selects the column headers)
Active Cell.Background Color = desired color
Then:
Active Cell.Active Column Number = -1 (this selects the row header)
Active Item.Row Number = -2 (this selects all rows)
Active Cell.Background Color = desired color
1 point
-
Hi All,
I'm looking for a good resource that explains LabVIEW's Execution System / Thread Allocation / Thread Priority system. As background to my request: I have an application with over 50 parallel loops running at fixed but configurable rates. Twenty of these loops are calling a .NET DLL and are thus not in a Timed Loop (there is a known issue, according to NI support, with calling a .NET DLL in a timed loop where the call time is large, i.e. upwards of a second). The remaining loops are performing other data acquisition.
Each loop is what I call a Task Controller: it looks after a specific piece of hardware, taking requests for data (via queues), performing data acquisition and then pumping the result back to the requester. In order to separate the timing of the functionality (and allow multiple requesters access to the same data), this process is not sequential but occurs in parallel loops. So there is a lot of parallel activity going on.
I notice that as more of these loops fire up, the slower the remaining loops become. The CPU usage tends to stay around 7-8% during this time, irrespective of how many loops are executing. Note that the .NET DLL calls (up to 20) are reasonably slow, and each could take up to 6 seconds to execute. The .NET DLL has been written to handle multithreading. The PC is a hyper-threaded quad core (i.e. 8 logical cores) @ 3.3 GHz. Kind of a meaty machine.
I should also mention that the majority of the VIs are reentrant. The only non-reentrant VIs are some FGVs and a few user interface VIs that reference the data in these FGVs. And before you ask, the FGVs are simply Get/Set for a handful of cluster points.
So I figure it's a simple case of thread starvation. Every VI is currently set to the Standard Execution System (via Same as Caller) with Normal priority. I figure that adjusting these settings on the top-level Task Controller VIs may assist in spreading the load to the remaining available, but not executing, threads. The SubVIs under each Task Controller will continue to use the Same as Caller setting, allowing me to separate each Task logically into appropriate Execution Systems. Any thoughts?
1 point
-
Start with NI's article "How Many Threads Does LabVIEW Allocate?" Also see the LabVIEW help for "Multitasking, Multithreading, and Multiprocessing." If you divide your code across execution systems instead of leaving them all at "Same as Caller" you should see the work distributed across more threads, and probably better performance and higher CPU use.
1 point
-
I think that's what the function does behind the scenes. A rectangle is simply one case of any number of geometries you can make with this function's inputs. The NI Vision rotation algorithm is more complete because it will interpolate colours when the rotated pixel positions are not integers, but otherwise it's the same. The rotation matrix in 2D is exactly what you state above.
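In case it helps anyone reading along, the standard 2D rotation matrix being referred to maps a point $(x, y)$ to $(x', y')$ for a rotation by angle $\theta$:

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$$

Rotation of points.vi
1 point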
-
Sweet! That solves it. So, now we can write a LabVIEW console app! Here is the VI that lets you write to the StdOut of the calling console: Write to StdOut of Calling Parent.vi
-John
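For anyone curious what such a VI has to do under the hood, the usual Win32 recipe is to attach to the parent process's console and write to its stdout handle. A hedged ctypes sketch of that recipe, not necessarily what John's VI does:

```python
import ctypes

kernel32 = ctypes.windll.kernel32

# Attach to the console of the process that launched us;
# ATTACH_PARENT_PROCESS is (DWORD)-1.
kernel32.AttachConsole(0xFFFFFFFF)

h = kernel32.GetStdHandle(-11)               # STD_OUTPUT_HANDLE
msg = b"hello from a windowed app\r\n"
written = ctypes.c_uint32(0)
kernel32.WriteFile(h, msg, len(msg), ctypes.byref(written), None)
```

1 point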
