Leaderboard
Popular Content
Showing content with the highest reputation since 05/02/2025 in Posts
-
So a couple of years ago I was reading the ZLIB documentation on compression and how it works. It was an interesting blog post going into what compression algorithms like zip really do, using LZ77 and Huffman tables. It was very educational and I thought it might be fun to try to write some of it in G. The deflate function in ZLIB is very well understood as an external code call, so the only place a pure-G version ever so slightly made sense in my head was on LabVIEW RT. The wonderful OpenG Zip package has support for Linux RT in version 4.2.0b1 as posted here, and for now that is the version I will be sticking with because of the RT support. Still, I went on my little journey of trying to make my own in pure LabVIEW to see what I could do. My first attempt failed immensely, and I did not have the knowledge to understand what was wrong or how to debug it.

As a test of AI progression I decided to dig up this old code and start asking AI what I could do to improve it, and to finally get it working properly. Well, over the holiday break Google Gemini delivered. It was very helpful for the first 90% or so. It was great having a back-and-forth dialog about edge cases and how things are handled. It gave examples and knew what the next steps were. Admittedly it is a somewhat academic problem, so maybe that's why the AI did so well, and I did still reference some of the other content online. The last 10% were a bit of a pain: the AI hallucinated several times, giving wrong information or analyzing my byte streams incorrectly. But that did help me understand it even more, since I had to debug it myself.

So attached is my first go at it, in 2022 Q3. It requires some packages from VIPM.IO: Image Manipulation, for making some debug tree drawings (which is actually disabled at the moment), and the new version of my Array package, 3.1.3.23.

So how is the performance? Well, I only have the deflate function, and it only implements the dynamic table, which only gets used once there is some amount of data, around 1K and larger. I tested it with random data with lots of repetition: my 700k string took about 100 ms to process, while the OpenG method took about 2 ms. Compression was similar, but the OpenG output was about 5% smaller too. It was a lot of fun, I learned a lot, and will probably apply things I learned, but realistically I will stick with OpenG for real work. If there are improvements to make, the largest time sink is in detecting the repeated patterns. It is a 32k sliding window and I'm unsure what techniques can be used to make it faster.

ZLIB G Compression.zip
5 points
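The usual trick for speeding up that 32 KB window search (it's what zlib itself does, as far as I know) is to index every 3-byte prefix and keep chains of earlier window positions that share it, so you only compare against likely candidates instead of scanning the whole window. A rough Python sketch of the idea, just to illustrate the data structures; the names are my own, not G code:

```python
# Hash-chain match finding as used by deflate-style compressors: instead of
# scanning the whole 32 KB window, index each 3-byte prefix and only compare
# against positions that share it.
WINDOW = 32 * 1024
MIN_MATCH = 3
MAX_MATCH = 258

def find_match(data, pos, head, prev):
    """Return (length, distance) of the best match found by walking the chain."""
    best_len, best_dist = 0, 0
    if pos + MIN_MATCH > len(data):
        return best_len, best_dist
    key = data[pos:pos + MIN_MATCH]
    cand = head.get(key, -1)
    while cand >= 0 and pos - cand <= WINDOW:
        length = 0
        limit = min(MAX_MATCH, len(data) - pos)
        while length < limit and data[cand + length] == data[pos + length]:
            length += 1
        if length > best_len:
            best_len, best_dist = length, pos - cand
        cand = prev.get(cand, -1)      # walk to the next-older position
    return best_len, best_dist

def insert_pos(data, pos, head, prev):
    """Record 'pos' as the newest occurrence of its 3-byte prefix."""
    key = data[pos:pos + MIN_MATCH]
    prev[pos] = head.get(key, -1)
    head[key] = pos
```

In G the same idea maps to a couple of integer arrays acting as the head and prev tables, updated as the encoder walks through the input.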
-
Phew, that is a pretty strong opinion! Although I personally am not a fan of the overall style of DQMH, none of my problems are with the scripting/wizards or placeholder text. I think any framework that tries to do "a lot" will be complicated... your own personal framework (which you likely find trivial to use) is likely to be a bit weird to others. DQMH is extremely popular for a reason... To paraphrase the words of a wiser person than I: "please don't yuck someone else's yum."
3 points
-
It is not that LabVIEW MAY unregister the reference, but that it WILL unregister the reference as soon as the top-level VI in whose hierarchy the reference was created goes idle. This is by design, and the only way to prevent it is either to keep that hierarchy active until every other user of that refnum has finished, or to delegate creation of the refnum to the place where it is needed, for instance through an LV2-style global that maintains the reference in a shift register and, when called for the first time, creates the refnum if the shift register contains an invalid one. True Actor Framework design kind of mandates that all refnums are created in the context where they are used, not in some other global instance that may or may not keep running for as long as some Actor is using the refnum.
2 points
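A rough text-language analogue of that LV2-style global, just to illustrate the "create on first call, reuse afterwards" idea (Python, hypothetical names; in G this is the shift register in a non-reentrant VI):

```python
# Rough analogue of an LV2-style global: the module-level variable plays the
# role of the shift register, and the refnum is only created on first use.
import queue

_shared_queue = None   # "invalid refnum" until the first call

def get_shared_queue():
    """Create the shared queue the first time it is asked for, reuse it after."""
    global _shared_queue
    if _shared_queue is None:
        _shared_queue = queue.Queue()
    return _shared_queue
```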
-
Hi everyone, just want to share our open source project "LabVIEW Python Bridge": connect LabVIEW apps with Python apps in real time with multi-processing data queues. https://github.com/jmor2000/labview_python_bridge If anyone has any questions or suggestions for new developments/features, let me know. Cheers, Jeff
2 points
-
Are you seriously expecting anyone to install a random executable on their system from an unknown publisher, provided by an anonymous person on the web, where one can't even get a proper link in Google to the actual company page? Sorry, but anyone doing that should not be allowed within 5 m of a computer system!
2 points
-
Absolutely echo what Shaun says. Nobody banned them. But most who tried to use them have, after some more or less short time, run from them with many hairs ripped out of their heads, a few nervous tics from too much caffeine consumption, and swearing never to try them again. The idea is not really bad, and if you are willing to suffer through it you can make pretty impressive things with them, but the execution of that idea is anything but ideal and in many places feels like a half-thought-out idea that was abandoned once it was kind of working, but before it was a really easily usable feature.
2 points
-
Seems like this one has "escaped everyone's grasp" too: ParallelLoop.ShowAllSchedules=True, because it was only checked from the password-protected diagram of ParallelForLoopDialog.vi (LabVIEW 20xx\resource\dialog). It has been present since LabVIEW 2010. When activated, it allows you to apply more advanced iteration partitioning schedules. In other words, instead of this you will get this. Could this be useful? I can't say. Maybe in some very specific use cases. In my quick tests I didn't manage to get any increase in performance, and it's easy to mess up those options and make things worse than the default. It can also be changed through its scripting counterpart.
2 points
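If you want to try it, the token presumably goes into your LabVIEW.ini like any other config token (the [LabVIEW] section shown here is the usual one; adjust to your installation):

```ini
[LabVIEW]
ParallelLoop.ShowAllSchedules=True
```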
-
Look at this new download on VIPM: https://www.vipm.io/package/bjm_lib_request_power/
2 points
-
You want the ability to override the Equality or Comparison operators? I'm unsure whether that really existed in the OpenG packages, but now you have those neat malleable VIs that let you do that: Search Unsorted 1D Array, Sort 1D Array, and Search Sorted 1D Array. They have an additional input to specify your own "equals" or "less" function in the form of a custom comparison class or a VI refnum. There's an article to help: Creating a Custom Sorting Function in LabVIEW
2 points
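For anyone more used to text languages, the idea is the same as handing a sort or binary search your own comparison callback; a rough Python equivalent of the pattern (my own example, not the OpenG or malleable-VI API):

```python
# Supplying your own "less than" function: a case-insensitive sort followed
# by a sorted search, analogous to wiring a custom compare VI/class.
import bisect
from functools import cmp_to_key

data = ["banana", "Apple", "cherry"]

def ci_less(a, b):
    """Case-insensitive compare: -1, 0 or 1, like a custom 'less' callback."""
    a, b = a.lower(), b.lower()
    return -1 if a < b else (1 if a > b else 0)

data.sort(key=cmp_to_key(ci_less))                              # Sort 1D Array
idx = bisect.bisect_left([s.lower() for s in data], "cherry")   # Search Sorted 1D Array
print(data, idx)
```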
-
This is exactly what was said in that ancient thread: Tree control in labview. If you add 65536*N to the Item Symbols property of the Listbox and have the "Enable Indentation" option activated, you shift the symbol/glyph and the text N levels to the right (the arithmetic is sketched below). This could be useful for simple parent-child relationships if you don't want to use a Tree. It's still used in the Find Examples / NI Example Finder window:
2 points
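A tiny sketch of that arithmetic, for illustration only (Python; the property write itself is of course done in G):

```python
# Per the trick above: take the normal Item Symbols value and add 65536 per
# indentation level you want (with "Enable Indentation" turned on).
def indented_symbol(symbol_value, indent_level):
    return symbol_value + 65536 * indent_level

print(indented_symbol(1, 3))   # glyph 1, pushed three levels to the right
```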
-
I once went for an interview where they gave me a coding test and asked me to modify it. It was a very long time ago, so I don't remember the exact modification they wanted (nothing to do with memory leaks), but I do remember the Obtain Queue and read-queue calls inside a while loop with the Release Queue outside. I asked if they wanted me to also fix the memory leak as well as make the modifications, and they were a little puzzled until I explained what you have just said. I must have seen (and fixed) this while-loop bug pattern a thousand times since then in various code bases. I also created this VI, which I generally use instead of the primitives: it initialises on first call, can be called from anywhere, and prevents most foot-shooting by rolling them all into a single VI and ensuring all references but one are closed after use. Queue.vi
2 points
-
In the past I have used the IMAQ drivers for getting the image, which on their own do not require any additional runtime license. It is one of those lesser-known secrets that acquiring and saving the image is free, but any of the useful tools have a development and deployment license associated with them. I've also had mild success with leveraging VLC. Here is the library I used in the past, and here is another one I haven't used but that looks promising. With these you can have a live stream of a camera as long as VLC can talk to it, and then pretty easily save snapshots. EDIT: The NI software for getting images through IMAQ for free is called "NI Vision Common Resources". This LAVA thread is where I first learned about it.
2 points
-
Just to share how I got around this: by deleting one front panel item at a time I found that a single control was causing PaneRelief to crash; an XY graph. Setting it temporarily to not scale and replacing it with a standard XY graph (the one I had had some colours set to transparent, etc.) was enough to avoid having PaneRelief crash LabVIEW, but it would now just present a timeout error. I found a way around this too, though: the VI in question was a member of a DQMH lvlib, which probably added a lot of complexity for PaneRelief. With a copy saved as a non-member it worked: I could replace the graph, edit the splitters with PaneRelief without the timeout error (even setting the size to 0), then copy back the original graph to replace the temporary one, and finally move the copy back into the lvlib and swap it with the original. Voila! What a Relief... 😉 I will probably have to repeat this whole ordeal if I ever need to readjust the splitters in that VI with PaneRelief, though. 😮
2 points
-
My part of the world! Unfortunately, I don't have any connections to the realm you are talking about, but I would look at Northern Kentucky University (just across the Ohio River), as they have a good CS program (or at least used to). There is also Miami University (not to be confused with the one in Florida), Wright State University (where I got my MSE), and the University of Dayton, about an hour north of here in the Dayton area. The University of Kentucky is about 2 hours south (Lexington, Kentucky), which is about the same distance as Ohio University (Athens) or THE Ohio State University (Columbus).
1 point
-
@Rolf Kalbermatter my team and I still use it in some systems here! In fact, this very last week we needed to add some Lua stuff to an old project.
1 point
-
You can 'renew' your LabVIEW CE license this way if you haven't tried it:
1. Go to https://www.ni.com
2. Hover over your user icon in the upper right and select "My Account".
3. Scroll down to "Products and Services" and select "View my products".
4. Scroll down to find your LabVIEW Community Edition and select "Renew" from the drop-down menu to the right.
Apologies if you've already tried this and still had issues. Maybe someone else will find it useful.
1 point
-
Well, I referred to the VI names really: the ZLIB Inflate VI calls the uncompress function, which internally calls inflate_init, inflate and inflate_end, and the ZLIB Deflate VI calls the compress function, which accordingly calls deflate_init, deflate and deflate_end. The init, add and end functions are only useful if you want to process a single stream in chunks. It's still only one stream, but instead of passing the whole compressed or uncompressed stream in one go, you initialize a compression or decompression reference, then add the input stream in smaller chunks and each time get the corresponding output stream. This is useful for processing large streams in smaller chunks to save memory, at the cost of some processing speed. A stream is simply a bunch of bytes; there is no inherent structure in it, you would have to add that yourself by partitioning the chunks accordingly.
1 point
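Python's zlib binding happens to expose the same chunked, init/add/end style through compressobj/decompressobj, which may help as a reference for the pattern (this is just the same idea in another language, not the OpenG VIs):

```python
# Streaming ("init / add / end") use of zlib: feed the stream in chunks so the
# whole input never has to sit in memory at once.
import zlib

def deflate_in_chunks(chunks):
    comp = zlib.compressobj(6)          # "init" (compression level 6)
    out = b""
    for chunk in chunks:
        out += comp.compress(chunk)     # "add": may return partial output
    out += comp.flush()                 # "end": emit whatever is still buffered
    return out

def inflate_in_chunks(chunks):
    decomp = zlib.decompressobj()       # "init"
    out = b""
    for chunk in chunks:
        out += decomp.decompress(chunk) # "add"
    out += decomp.flush()               # "end"
    return out

data = [b"hello " * 100, b"world " * 100]
assert inflate_in_chunks([deflate_in_chunks(data)]) == b"".join(data)
```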
-
You could also check https://github.com/ISISSynchGroup/mjpeg-reader, which provides a .NET solution (not tried). So, who volunteers for something working on Linux?
1 point
-
The thing I loved about the original LabVIEW was that it was not namespaced or partitioned. You could run an executable and share variables without having to use things like memory maps. I used to have a toolbox of executables (DVM, power supplies, oscilloscopes, logging, etc.) and each test system was just launching the appropriate executable[s] at the appropriate times. It was like OOP composition for an entire test system, but with executable modules. Additionally, crashes were unheard of. In the 1990s I think I had one insane object in 18 months and didn't know what a GPF was until I started looking at other languages. We could run out of memory if we weren't careful, though (remember the Bulldozer?). Progress!
1 point
-
I hadn't had much time to investigate this until this month, but I think I've found the cause. The XNodes on the production computer were not designed optimally. In the AdaptToInputs ability I was unconditionally returning a GenerateCode reply, thinking that AdaptToInputs is only called when interacting with the XNode (connecting/disconnecting wires). It turned out that LabVIEW also calls the AdaptToInputs ability once when the VIs are loaded and any single change is made, no matter whether it touches the XNode or not. As I had many such non-optimal XNodes in many places, this was causing code regeneration in all of them. Besides that, some of my VIs had very high code complexity (11 to 13) because of a bunch of nested structures. When the XNode regeneration occurred simultaneously with the VI recompilation, it was taking that minute or so. After I added extra conditions to my AdaptToInputs ability (issue a GenerateCode reply only when the Term Types have changed), the edits in my VIs started to take 1.5 seconds. The hierarchy saves can still be slow when some 'heavy' VIs are changed, but it's a task for me to refactor those VIs so their complexity drops to 10 or less. By the way, my example from the previous page was not suitable for demonstrating the situation, as its code complexity is low and the Match Regular Expression XNode does not issue a GenerateCode reply in AdaptToInputs.
1 point
-
I have always used this library to prevent the screensaver and the Windows lock screen from kicking in. Our IT locks down the computer so the screensaver and lock screen cannot be changed. This library basically tells Windows it's in presentation mode (e.g., giving a slideshow, watching a movie, etc.), so that the screen will not go to the screensaver or lock screen.
1 point
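I don't know exactly which call that library makes, but the usual Win32 mechanism for this "presentation mode" behaviour is SetThreadExecutionState with the display-required flag; a minimal ctypes sketch under that assumption:

```python
# Assumption: the library does something equivalent to SetThreadExecutionState,
# the standard Win32 call for keeping the display awake (e.g. during
# presentations or video playback). Windows only.
import ctypes

ES_CONTINUOUS       = 0x80000000
ES_DISPLAY_REQUIRED = 0x00000002

def keep_display_awake(enable=True):
    """Tell Windows the display is required; call with False to restore defaults."""
    flags = ES_CONTINUOUS | (ES_DISPLAY_REQUIRED if enable else 0)
    ctypes.windll.kernel32.SetThreadExecutionState(flags)
```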
-
You might have more success posting this on the Discord. Most of the conversations happen there these days.
1 point
-
I only switched to Win10 three years ago from Win 7, and that was only because I wanted encrypted SMB to my NAS. I'll think about desktop Linux when they fix their application distribution methods. I dropped my Linux LabVIEW product support for a reason: my products broke every time someone else updated their product.
1 point
-
There is an Application property called Display->All Monitors. It will give you the pixel ranges of the monitors in your system. What I've done is use the calling VI's position to figure out which monitor it was on and then place the new VI window as needed. You could use a Win32 DLL call to get the mouse position as well, if that better meets your requirements.
1 point
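A small sketch of that "which monitor contains this point" step (Python, with made-up example rectangles standing in for what Display->All Monitors returns):

```python
# Each monitor is described by a (left, top, right, bottom) rectangle; find
# the one containing the calling window's (or mouse's) position.
monitors = [(0, 0, 1920, 1080), (1920, 0, 3840, 1080)]   # example values only

def monitor_for_point(x, y):
    for i, (left, top, right, bottom) in enumerate(monitors):
        if left <= x < right and top <= y < bottom:
            return i
    return 0   # fall back to the primary monitor

print(monitor_for_point(2500, 400))   # -> 1 (the second monitor)
```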
-
Yup. There is: MMAP (1.0.1).
1 point
-
I cannot look at your file, but I suggest saving the data to TDMS or any binary format of your choice. Once the file is saved, you can convert it to text.
1 point
-
@Rolf Kalbermatter the admins removed that setting for you, as everything you say should be written down and never deleted 🙂
1 point
-
If the child classes are statically linked into the code (via class constants or whatever other mechanism you use), then this approach should always work, because the child classes will always be in memory.
1 point
-
Had the same issue. Removing the VI from its lvlib, using Pane Relief to set the splitter size, and moving it back into the lvlib worked for me too. Thanks!
1 point
-
There are several alternatives to the NI GPU Toolkit that are considerably more up to date and actually still maintained: https://www.ngene.co/gpu-toolkit-for-labview https://www.g2cpu.com/
1 point
-
Regarding Levenshtein: Vladimir Levenshtein developed an algorithm for this in 1965; it is called the Levenshtein distance. Some years ago I developed a VI to calculate the Levenshtein distance. Here it is (LabVIEW 2016). Can you post your VIs in LV2020 or 2019, please? Levenshtein Distance.vi
1 point
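For reference, the textbook dynamic-programming version of the same algorithm in plain Python (independent of the attached VI, which is the LabVIEW implementation):

```python
# Levenshtein distance via dynamic programming, kept to two rows so memory
# stays O(min(len(a), len(b))) instead of O(len(a) * len(b)).
def levenshtein(a: str, b: str) -> int:
    if len(a) < len(b):
        a, b = b, a                          # make b the shorter string
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

assert levenshtein("kitten", "sitting") == 3
```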
-
Here is a VI that gets the title of the currently active window. You could then continually loop until the title you expect is active, then perform your operations. https://forums.ni.com/t5/LabVIEW/Get-Current-Active-Window/m-p/3930389#M1116926
1 point
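I don't know exactly which calls that VI wraps, but the standard Win32 route is GetForegroundWindow plus GetWindowTextW; a ctypes sketch of that approach, including the polling loop suggested above (the target title is just an example):

```python
# Get the handle of the foreground window and read its title, then poll until
# the expected window is active. Windows only.
import ctypes
import time

def active_window_title():
    hwnd = ctypes.windll.user32.GetForegroundWindow()
    length = ctypes.windll.user32.GetWindowTextLengthW(hwnd)
    buf = ctypes.create_unicode_buffer(length + 1)
    ctypes.windll.user32.GetWindowTextW(hwnd, buf, length + 1)
    return buf.value

while active_window_title() != "Untitled - Notepad":   # example target title
    time.sleep(0.1)
```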
-
Here is a quick and dirty edit. It allows the column separators to be moved, but I noticed that on resize it will set the column widths. This means that if you manually move the columns and then resize the control, it may change the columns in an unexpected way. At that point you can manually move the separators again. I only have 2017 and 2018, so this is for 2017 and newer now. Variant_Probe-2.4.3-0.ogp
1 point
-
I used scripting and low-level VI editing to generate a VI with every single decoration object in LabVIEW, at least those with IDs 0 to -4096. There may be some outside that range (and many in that range don't have a valid image associated with them), but this range contains a lot of them. 0 to -4096.vi
1 point
-
Looks like someone beat me to it! Oh well, I already exported it (also for 2009, incidentally), so I'll post it here in case it'd be more convenient to use a regular VI file. 0 to -4096.vi
1 point
-
I think I have this fixed. I tried a compiled version of the transport library; this worked without issue. I made the timeout changes, which did not seem to have an impact on performance. I then recompiled everything (the server-side code is used in multiple applications on the system); the delay I was seeing with the one message/response went down to expected amounts in tests. I've been waiting to test this with the whole system up and going; unfortunately, we've been battling drive issues that are stopping everything else. I can't definitively say it's fixed, and I can't point to a smoking gun. I'm appreciating this forum and the people on it right now.
1 point
-
Mwuhahahahaha! Three config tokens have escaped your grasp! I modified them specifically for folks like Flarn! They don't appear as plain text anywhere in the EXE (or in any VI, for that matter). Do they guard any great secret of LabVIEW? I'm not telling! But you can have fun poring through the code, looking for interesting bits, and trying to figure out what you need to put in your config file. LabVIEW 2013 or later. Good luck.
1 point
-
Hi All, I'm looking for a good resource that explains LabVIEW's Execution System / Thread Allocation / Thread Priority system. As background to my request: I have an application with over 50 parallel loops running at fixed but configurable rates. Twenty of these loops are calling a .NET DLL and are thus not in a Timed Loop (according to NI support there is a known issue with calling a .NET DLL in a Timed Loop when the call time is large, i.e. upwards of a second). The remaining loops are performing other data acquisition. Each loop is what I call a Task Controller: it looks after a specific piece of hardware, taking requests for data (via queues), performing data acquisition and then pumping the result back to the requester. In order to separate the timing of the functionality (and allow multiple requesters access to the same data), this process is not sequential but occurs in parallel loops. So there is a lot of parallel activity going on.

I notice that as more of these loops fire up, the remaining loops get slower. The CPU usage tends to stay around 7-8% during this time, irrespective of how many loops are executing. Note that the .NET DLL calls (up to 20) are reasonably slow and each could take up to 6 seconds to execute. The .NET DLL has been written to handle multi-threading. The PC is a hyper-threaded quad core (i.e. 8 logical cores) @ 3.3 GHz. Kind of a meaty machine. I should also mention that the majority of the VIs are re-entrant. The only non-reentrant VIs are some FGVs and a few user interface VIs that reference the data in these FGVs. And before you ask, the FGVs are simply Get/Set for a handful of cluster points.

So I figure it's a simple case of thread starvation. Every VI is currently set to the Standard Execution System (via Same as Caller) with Normal priority. I figure that adjusting these settings on the top-level Task Controller VIs may help spread the load to the remaining available, but not executing, threads. The subVIs under each Task Controller will continue to use the Same as Caller setting, allowing me to logically separate each Task into an appropriate Execution System. Any thoughts?
1 point
-
I think that's what the function does behind the scenes. A rectangle is simply one case of any number of geometries you can make with this function's inputs. The NI Vision rotation algorithm is more complete because it will interpolate colours when the rotated pixel positions are not integers, but otherwise it's the same. The rotation matrix in 2D is exactly what you state above, as written out below. Rotation of points.vi
1 point
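For completeness, the standard 2D rotation matrix being referred to (rotation by θ about the origin; to rotate about an arbitrary centre, translate the point to the origin first and translate back afterwards):

$$
\begin{pmatrix} x' \\ y' \end{pmatrix} =
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
$$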
