Leaderboard
Popular Content
Showing content with the highest reputation since 02/06/2025 in all areas
-
So a couple of years ago I was reading the ZLIB documentation on compression and how it works. It was an interesting read going into what compression algorithms like zip really do, using LZ77 and Huffman tables. It was very educational and I thought it might be fun to try to write some of it in G. The deflate function in ZLIB is very well understood as an external code call, so the only place a native version ever so slightly made sense in my head was on LabVIEW RT. The wonderful OpenG Zip package has support for Linux RT in version 4.2.0b1 as posted here, and for now that is the version I will be sticking with because of the RT support. Still, I went on my little journey trying to make my own in pure LabVIEW to see what I could do.

My first attempt failed immensely and I did not have the knowledge to understand what was wrong or how to debug it. As a test of AI progression I decided to dig up this old code and start asking AI what I could do to improve it and finally get it working properly. Over the holiday break Google Gemini delivered. It was very helpful for the first 90% or so: it was great having a back-and-forth dialog about edge cases and how things are handled, and it gave examples and knew what the next steps were. Admittedly it is a somewhat academic problem, so maybe that's why the AI did so well, and I did still reference some of the other content online. The last 10% were a bit of a pain. The AI hallucinated several times, giving wrong information or analyzing my byte streams incorrectly. But that did help me understand it even more, since I had to debug it.

So attached is my first go at it, in 2022 Q3. It requires some packages from VIPM.IO: Image Manipulation, for making some debug tree drawings (actually disabled at the moment), and the new version of my Array package, 3.1.3.23.

So how is performance? Well, I only have the deflate function, and it only implements the dynamic table, which only gets used once there is some amount of data, around 1K and larger. I tested it with random data with lots of repetition: my 700k string took about 100 ms to process while the OpenG method took about 2 ms. Compression was similar, but OpenG's output was about 5% smaller too. It was a lot of fun, I learned a lot, and I will probably apply things I learned, but realistically I will stick with OpenG for real work. If there are improvements to make, the largest time sink is in detecting the patterns: it is a 32k sliding window and I'm unsure what techniques can be used to make it faster. ZLIB G Compression.zip 5 points
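On speeding up the pattern detection: the usual trick, and the one zlib's own deflate relies on, is to hash every three-byte prefix into chains of earlier window positions, so each position is only compared against candidates that already share the first three bytes instead of scanning the whole 32k window. A rough C sketch of that idea follows; the names, sizes and the chain limit are illustrative and not taken from the attached G code, but the same two integer arrays translate directly to shift registers in a diagram.

#include <stdint.h>
#include <string.h>

#define WINDOW_SIZE 32768            /* 32k sliding window                    */
#define HASH_BITS   15
#define HASH_SIZE   (1 << HASH_BITS)
#define MIN_MATCH   3
#define MAX_MATCH   258              /* deflate's maximum match length        */
#define MAX_CHAIN   4096             /* stop after this many candidates       */

static int32_t head[HASH_SIZE];      /* newest position for each hash         */
static int32_t prev[WINDOW_SIZE];    /* previous position with the same hash  */

static void chains_init(void)
{
    memset(head, 0xFF, sizeof head); /* all entries become -1 (empty)         */
}

static uint32_t hash3(const uint8_t *p)
{
    /* hash the next three bytes into HASH_BITS bits */
    return ((p[0] << 10) ^ (p[1] << 5) ^ p[2]) & (HASH_SIZE - 1);
}

/* Record that a 3-byte sequence starts at 'pos'; call this for every
   position you move past after searching for a match there. */
static void chains_insert(const uint8_t *data, int32_t pos)
{
    uint32_t h = hash3(data + pos);
    prev[pos % WINDOW_SIZE] = head[h];
    head[h] = pos;
}

/* Longest match for data[pos..len-1] among earlier positions sharing the
   same 3-byte hash (caller ensures at least MIN_MATCH bytes remain at pos).
   Returns 0 if nothing of at least MIN_MATCH bytes was found. */
static int32_t longest_match(const uint8_t *data, int32_t pos, int32_t len,
                             int32_t *match_pos)
{
    int32_t best = 0, chain = MAX_CHAIN;
    int32_t cand = head[hash3(data + pos)];
    while (cand >= 0 && cand < pos && pos - cand <= WINDOW_SIZE && chain-- > 0) {
        int32_t n = 0;
        while (pos + n < len && n < MAX_MATCH && data[cand + n] == data[pos + n])
            n++;
        if (n > best) { best = n; *match_pos = cand; }
        cand = prev[cand % WINDOW_SIZE];
    }
    return (best >= MIN_MATCH) ? best : 0;
}

This is a simplified illustration and skips the window-sliding bookkeeping real zlib does, but the payoff is the same: each position is checked against a handful of hash-chain candidates rather than every offset in the window.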
-
Phew, that is a pretty strong opinion! Although I personally am not a fan of the overall style of DQMH, none of my problems are with the scripting/wizards or placeholder text. I think any framework that tries to do "a lot" will be complicated... your own personal framework (which you likely find trivial to use) is likely to be a bit weird to others. DQMH is extremely popular for a reason... To paraphrase the words of a wiser person than I, "please don't yuck someone else's yum" 3 points
-
Seems like this one has "escaped everyone's grasp" too. ParallelLoop.ShowAllSchedules=True It has gone unnoticed because it is only checked from the password-protected diagram of ParallelForLoopDialog.vi (LabVIEW 20xx\resource\dialog), and it has been present since LabVIEW 2010. When activated, it lets you apply a more advanced iteration partitioning schedule. In other words, instead of this you will get this. Could this be useful? I can't say. Maybe in some very specific use cases. In my quick tests I didn't manage to get any increase in performance. It's easy to mess up those options and make things worse than the default. It can also be changed by this scripting counterpart. 2 points
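If you want to try it yourself, the token goes into LabVIEW's configuration file (LabVIEW.ini next to LabVIEW.exe on Windows) like any other token; a minimal sketch, assuming the usual [LabVIEW] section:

[LabVIEW]
ParallelLoop.ShowAllSchedules=True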
-
Look at this new download on VIPM https://www.vipm.io/package/bjm_lib_request_power/2 points
-
You want the ability to override the Equality or Comparison operators? I'm unsure whether it really existed in the OpenG packages, but now you have those neat malleable VIs that let you do that: Search Unsorted 1D Array, Sort 1D Array, Search Sorted 1D Array. They have an additional input to specify your own equals or less-than function in the form of a custom comparison class or a VI refnum. There's an article to help: Creating a Custom Sorting Function in LabVIEW 2 points
-
This is exactly what was said in that ancient thread: Tree control in labview. So if you add 65536*N to the Item Symbols property of the Listbox and have the "Enable Indentation" option activated, you shift the symbol/glyph and the text N levels to the right. This could be useful for simple 'parent-child' relationships if you don't want to use a Tree. It's still used in the Find Examples / NI Example Finder window: 2 points
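For example, to show a row with glyph index 3 indented two levels (the glyph number here is just for illustration), you would write 3 + 2*65536 = 131075 into that row's element of the Item Symbols array, with "Enable Indentation" turned on.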
-
I once went for an interview where they gave me a coding test and asked me to modify it. It was a very long time ago so I don't remember the exact modification they wanted (nothing to do with memory leaks), but I do remember the obtain queue and read queue inside a while loop with the release queue outside. I asked if they wanted me to also fix the memory leak as well as make the modifications, and they were a little puzzled until I explained what you have just said. I must have seen (and fixed) this while-loop bug pattern a thousand times since then in various code bases. I also created this VI which I generally use instead of the primitives, as it initialises on first call, can be called from anywhere, and prevents most foot-shooting by rolling them all into a single VI and ensuring all references but one are closed after use. Queue.vi 2 points
-
In the past I have used the IMAQ drivers for getting the image, which on their own do not require any additional runtime license. It is one of those lesser-known secrets that acquiring and saving the image is free, but any of the useful tools have a development and deployment license associated with them. I've also had mild success with leveraging VLC. Here is the library I used in the past, and here is another one I haven't used but which looks promising. With these you can have a live stream of a camera as long as VLC can talk to it, and then pretty easily save snapshots. EDIT: The NI software for getting images through IMAQ for free is called "NI Vision Common Resources". This LAVA thread is where I first learned about it. 2 points
-
Just to share how I got around this: by deleting one front panel item at a time I found that one single control was causing PaneRelief to crash: an XY graph. Setting it temporarily to not scale and replacing it with a standard XY graph (the one I had had some colours set to transparent etc.) was enough to avoid having PaneRelief crash LabVIEW, but it would now just present a timeout error. I found a way around this too though: the VI in question was a member of a DQMH lvlib that probably added a lot of complexity for PaneRelief. With a copy saved as a non-member it worked: I could replace the graph, edit the splitters with PaneRelief without the timeout error (even setting the size to 0), then copy back the original graph replacing the temporary one, and finally move the copy back into the lvlib and swap it with the original. Voila! What a Relief... 😉 I probably have to repeat this whole ordeal if I ever need to readjust the splitters in that VI with PaneRelief though 😮 2 points
-
I confirm that this license is nearly identical to the standard EULA we use for our commercial products. Some wording is not applicable to a distributed palette of VIs like this. Our intention was to share a few reusable tools, used internally, with the community. Ideally, we should have released them under a standard open-source license such as MIT or a similar option. These VIs have been released “as-is,” without support or any guarantee that they will function for your specific use case. You may need to troubleshoot or fix any issues on your own. Feel free to use them in any context. I’ll look into whether it's possible to update the packages on the tool network to replace the current license with a more standard open-source one.2 points
-
I put a temporary ban on inserting external links in posts (except from a safe list). We'll see what effect it has. 2 points
-
Your reporting of spam is helpful. And just like you are doing, one report per user is enough, since I ban the user and all their posts are deleted. If spam gets too frequent I notify Michael and he tweaks dials behind the scenes to try to help. This might be by looking at and temporarily banning new accounts from IP blocks or countries, or banning keywords in posts. He will also upgrade the forum's platform tools occasionally, and they get better at detecting and rejecting spam. 2 points
-
You can 'renew' your LabVIEW CE license this way if you haven't tried it:
1. Go to https://www.ni.com
2. Hover over your user icon in the upper right and select "My Account".
3. Scroll down to "Products and Services" and select "View my products".
4. Scroll down to find your LabVIEW Community Edition and select "Renew" from the drop-down menu to the right.
Apologies if you've already tried this and still had issues. Maybe someone else will find it useful. 1 point
-
Well, I referred to the VI names really: the ZLIB Inflate VI calls the decompress function, which internally calls the inflate_init, inflate and inflate_end functions, and the ZLIB Deflate VI calls the compress function, which accordingly calls deflate_init, deflate and deflate_end. The init, add, end functions are only useful if you want to process a single stream in chunks. It's still only one stream, but instead of passing in the whole compressed or uncompressed stream at once, you initialize a compression or decompression reference, then add the input stream in smaller chunks and each time get the corresponding output stream. This is useful to process large streams in smaller chunks to save memory at the cost of some processing speed. A stream is simply a bunch of bytes. There is no inherent structure in it; you would have to add that yourself by partitioning the chunks accordingly. 1 point
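For anyone wanting to see what that init/add/end pattern looks like against the raw zlib C API (which those VIs wrap), here is a sketch. It is simplified: it assumes the output buffer is already large enough for the whole compressed stream, e.g. sized with deflateBound(), so only the input is chunked.

#include <string.h>
#include <zlib.h>

/* Compress 'in' by feeding it to deflate() in 16 KB chunks instead of one
   big call. Sketch only: assumes 'out_cap' is large enough for the whole
   compressed stream (deflateBound() gives a safe upper limit). */
int deflate_in_chunks(const unsigned char *in, size_t in_len,
                      unsigned char *out, size_t out_cap, size_t *out_len)
{
    z_stream strm;
    memset(&strm, 0, sizeof strm);
    if (deflateInit(&strm, Z_DEFAULT_COMPRESSION) != Z_OK)   /* "init" */
        return -1;

    strm.next_out  = out;
    strm.avail_out = (uInt)out_cap;

    const size_t CHUNK = 16384;
    size_t fed = 0;
    while (fed < in_len) {                                   /* "add" loop */
        size_t n = (in_len - fed < CHUNK) ? (in_len - fed) : CHUNK;
        strm.next_in  = (Bytef *)(in + fed);
        strm.avail_in = (uInt)n;
        fed += n;
        /* Z_FINISH only on the last chunk, Z_NO_FLUSH otherwise */
        int flush = (fed == in_len) ? Z_FINISH : Z_NO_FLUSH;
        if (deflate(&strm, flush) == Z_STREAM_ERROR) {
            deflateEnd(&strm);
            return -1;
        }
    }
    *out_len = out_cap - strm.avail_out;
    deflateEnd(&strm);                                       /* "end" */
    return 0;
}

Decompression is symmetrical with inflateInit/inflate/inflateEnd; the main practical difference is that you don't know the output size in advance, so the output is normally collected chunk by chunk as well.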
-
I haven't had much time to investigate this until this month, but I think I've found the cause. The XNodes on the production computer were not designed optimally. In the AdaptToInputs ability I was unconditionally passing a GenerateCode reply, thinking that AdaptToInputs is only called when interacting with the XNode (connecting/disconnecting wires). It turned out that LabVIEW also calls the AdaptToInputs ability once when the VIs are loaded and any single change is made, no matter whether it touches the XNode or not. As I had many such non-optimal XNodes in many places, it was causing code regeneration in all of them. Besides that, some of my VIs had very high code complexity (11 to 13) because of a bunch of nested structures. When the XNode regeneration was occurring simultaneously with the VI recompilation, it was taking a minute or so. After I added extra conditions to my AdaptToInputs ability (issue a GenerateCode reply only when the Term Types have changed), the edits in my VIs started to take 1.5 seconds. Hierarchy saves can still be slow when some 'heavy' VIs are changed, but it's a task for me to refactor those VIs so their complexity drops to 10 or less. By the way, my example from the previous page was not suitable for demonstrating the situation, as its code complexity is low and the Match Regular Expression XNode does not issue a GenerateCode reply in AdaptToInputs. 1 point
-
I don't do Discord. I don't even do Ni.com. Feedback isn't really necessary. I only knocked it up because I went down a rabbit hole and wasn't impressed with the existing LabVIEW solutions. I thought I'd throw it in here to see if someone could improve it. My solution is optimised but there may have been a better alternative solution or maybe someone had a nice JPEG one (LSB doesn't survive JPEG compression). You might get a mention in the readme just for responding1 point
-
You may also want to tell people where they can actually download or at least buy this. Although if you want to sell it, do not expect too many reactions. It is already hard to get people to use such toolkits when you offer them for free download. 1 point
-
I would suggest RabbitMQ; I want(ed) to present it at a LabVIEW user group (LUGE) but haven't done it yet. It's very powerful. I use Redis and did a quick presentation (in French) at LUGE recently; I haven't used the stream feature though, I've only used it as a cache. 1 point
-
There is an Application property called Display->All Monitors. It will give you the pixel ranges of the monitors in your system. What I've done is to use the calling VI's position to figure out which monitor it was on and then place the new VI window as needed. You could use a Win32 DLL call to get the mouse position as well if that better meets your requirements. 1 point
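The Win32 call in question is GetCursorPos in user32.dll; in LabVIEW it would be a Call Library Function Node, but a small C sketch shows the shape of it:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    POINT pt;
    if (GetCursorPos(&pt)) {
        /* pt.x / pt.y are screen pixels; compare them against the monitor
           rectangles from Display->All Monitors to find the monitor the
           cursor is currently on. */
        printf("cursor at %ld, %ld\n", pt.x, pt.y);
    }
    return 0;
}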
-
I've just spent an hour arguing with an LLM (Deepcoder). TL;DR: A.I. is useless at programming.

I had a bug. I'd spent about an hour trying to figure it out and not succeeding, but it shouldn't be that hard; I'm just missing something obvious. So. Ideal scenario for a clever AI to show dominance and help out a poor old flesh-bag programmer, right? Just point out the mistake or mistakes and laugh at my stupidity like a real coder.

The bug was that the address from recvfrom would be 0.0.0.0 instead of 127.0.0.1. The problem was either that bind wasn't binding to a specific address (which it was supposed to) or that the address translation was not working quite right. I gave it 3 functions where I thought the bug was and explained that IPv6 seemed to be reporting correctly but IPv4 was in error: one function (Listen) had the bind call, one function had the IPv6 and IPv4 address translation from a SOCKADDR_STORAGE structure to strings, and the other function was the read function with the recvfrom. This is an abbreviation of the conversation:

- Round one. The following code has a bug that returns 0.0.0.0 as the IPv4 address from "recvfrom" when it should be returning "127.0.0.1". Find the reason that the code is returning 0.0.0.0 instead of 127.0.0.1 and rewrite the code to fix the issue.
It told me how to bind to a specific address and how to use the recvfrom function. It then basically wrote the example on the MS page for recvfrom but within the Listen function.

- Round 2. I'm not using the "sockaddr_in" structures; I'm using SOCKADDR_STORAGE structures. Please modify your example to use SOCKADDR_STORAGE structures.
It apologised, made an excuse about misunderstanding, and then proceeded to create an example, still using sockaddr_in structures, but this time hardcoding "127.0.0.1" in the bind address.

- Round 3. No, no, no. It needs to support user-entered IPv6 and IPv4 but there is a bug with the IPv4 address. It also needs to use SOCKADDR_STORAGE as in the code I supplied to you. Re-evaluate the code and write a version of the code that fixes the bug.
It apologised, said that it understood why it got it wrong, and then proceeded to create an example, still using sockaddr_in structures, but now hardcoding "::1" in the bind address.

- Round 4. Look. It needs to support both IPv6 and IPv4 and that's the reason I'm using SOCKADDR_STORAGE. If you don't have enough information then ask for clarification, but you have the code that the bug is in, so find the damned bug FFS!
Another apology, said it could understand my frustration, and then proceeded to spit out the example from MS again.

This went on for an hour. No code I could actually use in my functions, it never pointed out the bug in my code, and the prompts just got longer and longer as I tried to head off its stupidity. This was one of the functions:

int Addr2Address(SOCKADDR_STORAGE addr, PCHAR Address, int *Port, int *IPvType)
{
    int err = 0;
    *IPvType = 0;
    switch (addr.ss_family) {
        case AF_INET6: {
            if (Address == NULL) {return 46;}
            *IPvType = 2;
            char strAddress[46];
            inet_ntop(addr.ss_family, (void*)&((sockaddr_in6 *)&addr)->sin6_addr, Address, sizeof(strAddress));
            break;
        }
        case AF_INET: {
            if (Address == NULL) {return 16;}
            *IPvType = 1;
            char strAddress[16];
            inet_ntop(addr.ss_family, (void*)&((sockaddr_in6 *)&addr)->sin6_addr, Address, sizeof(strAddress));
            break;
        }
        default: {err = WSAEPROTONOSUPPORT; break;}
    }
    *Port = ntohs(((sockaddr_in6 *)&addr)->sin6_port);
    return err;
}

The bug is in the AF_INET case:

inet_ntop(addr.ss_family, (void*)&((sockaddr_in6 *)&addr)->sin6_addr, Address, sizeof(strAddress));

It should not be an IPv6 address conversion, it should be an IPv4 conversion. That code results in a null for the address passed to inet_ntop, which is converted to 0.0.0.0. I found it after a good night's sleep and a fresh start. 1 point
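For completeness, the AF_INET case with the fix described above (cast to sockaddr_in and convert sin_addr instead of the IPv6 field) would look something like this:

case AF_INET: {
    if (Address == NULL) {return 16;}
    *IPvType = 1;
    char strAddress[16];
    /* IPv4: cast to sockaddr_in and convert sin_addr, not sin6_addr */
    inet_ntop(addr.ss_family, (void*)&((sockaddr_in *)&addr)->sin_addr, Address, sizeof(strAddress));
    break;
}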
-
I cannot look at your file, but I suggest saving the data to TDMS or any binary format of your choice. Once the file is saved, you can convert it to text. 1 point
-
There are no typos or errors in your posts. Only pearls of wisdom and oracles of truth that we mortals can't understand yet... 1 point
-
@Rolf Kalbermatter the admins removed that setting for you as everything you say should be written down and never deleted 🙂1 point
-
Hello everyone, I developed an add-on toolkit for LabVIEW which implements most of OpenCV's APIs. It includes more than 2700 VIs, covering 13 of OpenCV's 15 modules (all except flann and gapi). You can use it to control cameras, process images, run DNN models and so on. Welcome to my CSDN blog to download it and give it a try! (Chargeable, 30-day trial) Requirements: Windows 10 or 11, LabVIEW >= 2018, 32 or 64 bits. 1 point
-
I kind of liked this idea and wished VIMs could allow such a backpropagation. I even had a thought of posting an idea on the dark forums. But then I played a while with the Variant To Data node. It doesn't play well: it can't determine a sink if a polymorphic VI is connected, or even when a LV native (yellow) node is connected. Borders of structures are another issue, obviously. So it would require making at least two ideas: implement VIM backpropagation and enhance the Variant To Data node. (As a hack one could eliminate the Variant To Data in their code with the coerceFromVariant=TRUE token, but then the diagram starts to look odd and no error handling is performed.) If someone still wants the code shown in the very first post, it's here: https://code.google.com/archive/p/party-licht-steuerung/source/default/source?page=3 (\trunk\PLS-Code\PLS Main.vi). And these are the papers to progress through the lessons: LabVIEW Intermediate I Successful Development Practices Course Manual. Nothing interesting there for an experienced LV'er though.

The XNodes demonstrated here work way better and could be a good alternative (if you're OK with unsupported features, of course). As I tried to adapt them for my own purposes, I decided to improve the sink search technique. It surprised me a bit that there's still no complete code to walk through all the nested structures to determine a source/sink by its wire. Maybe I didn't search well, but all I found was this popup plugin: Find Wire Source.llb. It stops on Case structures though. I have reversed its logic to search for a sink instead of a source and tried to apply recursion when it encounters a Case structure. Well, it's still not ideal, but now it works in most of my cases. There are some cases when it cannot find a sink, e.g. wire branches with void terms: too many scenarios to process them all. Nevertheless, this little VI might be useful for someone. You may use it as a popup plugin, of course, or may pull out that Execute Find Wire Destination (R).vi and use it in your XNodes. As an example: Find Wire Destination.llb

I already tried such nodes in a work project. I must admit that back-propagation is not suitable all the time, so about 50/50. But when it's used, it works. 1 point
-
That's how I'd do it. Then combine that with the Foreign Key Sort from my Array package, putting the Time Stamps into the Keys, then paths into the Arrays, and it will sort the paths from oldest to newest. Reverse the array and index at 0, or use Delete From Array to get the last element, which would be the newest file.1 point
-
I actually had a similar experience when first moving everything to the new OpenG structure. It broke heaps of stuff (even inside its own OpenG stuff), so I rolled back the change. Some time later I tried again, and I think I did have to deal with a bit of pain initially with relinking or maybe some missing stuff, but since then things have been stable. 1 point
-
Put the acquire image and save to file in the event structure timeout case, but only write to file conditionally (i.e. if the user has clicked the button)1 point
-
Well, regardless of the reason, what I was trying to say is that references opened in a VI get closed when the VI that opened them goes idle. 1 point
-
What doesn't work with the function AJ_NETSDK_IPC_PTZControl() on page 21/22? Or are you not using the SDK functions to retrieve the RTSP stream but some other ready-made interface for LabVIEW? Meaning you have no idea how to interface to a DLL? A few points to consider: 1) The camera may not like a secondary connection, either through the SDK or through generic TCP/IP, while it is busy streaming image data to VLC or whatever interface. 2) Trying to reverse engineer the TCP/IP binary stream protocol is likely going to be cumbersome and difficult to realize, as it is usually proprietary. The SDK interface is simple enough to use, unless you lack any and all understanding of C programming. It's not a CIN node that you will need to configure either, but a CLN (Call Library Node). CINs are not only legacy technology but simply not supported anymore on most modern LabVIEW versions. An interesting problem, but not one I can help you with as I do not have that hardware, and I would expect it to be a bit cumbersome considering the above two points. 1 point
-
To be honest, I always thought those should be in the Visible Items menu.1 point
-
To analyze large time series remotely I have a client application that splits the data transfer using two methods: it breaks down the periods into subperiods that are then assembled into the full period by the receiver, *and* if the subperiods are too large as well, they are retrieved gradually (interlaced) by transferring decimated sets with a decimation offset. So, as a result of this, I need to merge multiple overlapping fragments of time series (X and Y array sets).

My base solution for this just concatenates two sets, sorts the result based on the time (X values) and finally removes duplicate samples (XY pairs) from it. This is simple to do with the OpenG array VIs (the sort 1D and remove duplicates VIs) or the improved VIM versions from @hooovahh, but it is not optimal. My first optimized version instead runs through the two XY sets in a for loop that picks consecutive unique values from either of the two sets until there are no such entries left. This solution is typically 12-15 times faster than the base case (<10 ms to merge two sets into one set with 250 000 samples on my computer, e.g.).

Has anyone else made or seen a solution for this before? It is not as generic as the array operations covered by the OpenG array library, e.g., but it still seems like something many people might need to do here and there... (I had VIs in my collection to stitch consecutive time series together with some overlap, but not any that handle interlacing as well.) It would be interesting to see how optimized and/or generalized a solution could be.

I have attached the two mentioned examples here; they are not thoroughly tested yet, but serve as a reference (VIMs included just in case...). Below is a picture of the front panel showing an example of the result they produce with two given XY sets (in this case overlapping samples do not share the same value, but this is done just to illustrate which Y value it has picked when both sets have entries for the same X (time) value): Merging XY series LV2022.zip 1 point
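For reference, the single-pass idea (walking both sorted sets and only emitting X values that haven't been emitted yet) looks roughly like this in C. The names are illustrative, and ties between the two sets keep the sample from the first set here; the attached G code may resolve them differently.

#include <stddef.h>

/* Merge two XY series, each already sorted by X, dropping samples whose X
   (time) value has already been emitted. 'outx'/'outy' need room for
   na + nb elements. Returns the merged length. */
size_t merge_xy(const double *xa, const double *ya, size_t na,
                const double *xb, const double *yb, size_t nb,
                double *outx, double *outy)
{
    size_t i = 0, j = 0, k = 0;
    while (i < na || j < nb) {
        /* pick the smaller X from whichever set still has samples left */
        int take_a = (j >= nb) || (i < na && xa[i] <= xb[j]);
        double x = take_a ? xa[i] : xb[j];
        double y = take_a ? ya[i] : yb[j];
        if (take_a) i++; else j++;
        if (k > 0 && outx[k - 1] == x)
            continue;                 /* overlapping sample: keep the first */
        outx[k] = x;
        outy[k] = y;
        k++;
    }
    return k;
}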
-
@Ravi Beniwal, I have the same problem; VIPM reports that missing dependency for me too. If it isn't required, could it be removed? If it is required, could it be included? I suspect it is included in version 1.7.028, which does not report the error. I spent (wasted!) time looking for the dependency and downloading it, only to realise it might not be needed, as it launches OK from the LV IDE menu Tools\LabVIEW Task Manager... Peter 1 point
-
Regarding Levenshtein: Wladimir Levenshtein developed an algorithm for this in 1965. It is called the Levenshtein Distance. Some years ago I developed a VI to calculate the Levenshtein Distance. Here it is (LabVIEW 2016). Can you post your VIs in LV2020 or 2019, please? Levenshtein Distance.vi 1 point
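For anyone who just wants the algorithm in text form, here is a minimal two-row dynamic-programming version; this is the textbook formulation, not a transcription of the attached VI.

#include <stdlib.h>
#include <string.h>

/* Classic dynamic-programming Levenshtein distance using two rolling rows.
   O(len_a * len_b) time, O(len_b) memory. Allocation checks omitted for
   brevity. */
int levenshtein(const char *a, const char *b)
{
    size_t la = strlen(a), lb = strlen(b);
    int *prev = malloc((lb + 1) * sizeof *prev);
    int *curr = malloc((lb + 1) * sizeof *curr);
    for (size_t j = 0; j <= lb; j++)
        prev[j] = (int)j;                        /* distance from empty a */
    for (size_t i = 1; i <= la; i++) {
        curr[0] = (int)i;                        /* distance to empty b */
        for (size_t j = 1; j <= lb; j++) {
            int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
            int del = prev[j] + 1;               /* delete from a        */
            int ins = curr[j - 1] + 1;           /* insert into a        */
            int sub = prev[j - 1] + cost;        /* substitute / match   */
            int best = del < ins ? del : ins;
            curr[j] = best < sub ? best : sub;
        }
        int *tmp = prev; prev = curr; curr = tmp; /* roll the rows */
    }
    int result = prev[lb];
    free(prev);
    free(curr);
    return result;
}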
-
Which is funny because when I took them, I thought the CLD was much harder than the CLA.1 point
-
I used scripting and low-level VI editing to generate a VI with every single decoration object in LabVIEW, at least those with IDs 0 to -4096. There may be some out of that range (and many in that range don't have a valid image associated with them), but this range contains a lot of them. 0 to -4096.vi 1 point
-
Looks like someone beat me to it! Oh well, I already exported it (also for 2009, incidentally) so I'll post it here in case it'd be more convenient to use a regular VI file. 0 to -4096.vi1 point
-
Mwuhahahahaha! Three config tokens have escaped your grasp! I modified them specifically for folks like Flarn! They don't appear as plain text anywhere in the EXE (or in any VI for that matter). Do they guard any great secret of LabVIEW? I'm not telling! But you can have fun poring through the code, looking for interesting bits, and trying to figure out what you need to put in your config file. LabVIEW 2013 or later. Good luck. 1 point
-
I think that's what the function does behind the scenes. A rectangle is simply one case of any number of geometries you can make with this function's inputs. NI Vision's rotation algorithm is more complete because it will interpolate colours when the rotated pixel positions are not integers, but otherwise it's the same. The rotation matrix in 2D is exactly what you state above. Rotation of points.vi 1 point
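For reference, the standard 2D rotation of a point (x, y) by an angle θ about the origin is:

x' = x·cos(θ) - y·sin(θ)
y' = x·sin(θ) + y·cos(θ)

To rotate about an arbitrary centre, subtract the centre coordinates first and add them back afterwards; non-integer results for x' and y' are where the colour interpolation mentioned above comes in.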
-
It adds properties and methods to the LabVIEW VI server hierarchy, mostly application-related and presumably project and other such stuff, that NI considers too dangerous, untested, or giving too deep an insight into LabVIEW. It is related to scripting but not the same thing. Rolf Kalbermatter 1 point
