Everything posted by Manudelavega
-
Best practices for computing user equations
Manudelavega replied to Manudelavega's topic in LabVIEW General
Thanks, that fixed my issue, but now it is stuck on "starting expression tester, please wait" (or something similar) and I need to kill the LabVIEW process after a few minutes.
-
Best practices for computing user equations
Manudelavega replied to Manudelavega's topic in LabVIEW General
I tried to open the Expression Tester but it can't find one of its dependencies...
-
Best practices for computing user equations
Manudelavega replied to Manudelavega's topic in LabVIEW General
This is really nice! Kudos!
-
Best practices for computing user equations
Manudelavega replied to Manudelavega's topic in LabVIEW General
I started implementing exactly what you're describing, Tim. This RPN stuff is pretty neat.
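Here's the kind of stack-based evaluation I mean — a minimal Python sketch for illustration (not my actual LabVIEW code; the token list and variable values are made up):

import operator

# Evaluate a token list already in Reverse Polish Notation (RPN):
# numbers and variables push onto a stack, operators pop their two
# operands and push the result back.
OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv,
       "^": operator.pow, "min": min, "max": max}

def eval_rpn(tokens, variables):
    stack = []
    for tok in tokens:
        if tok in OPS:
            b = stack.pop()
            a = stack.pop()
            stack.append(OPS[tok](a, b))
        elif tok in variables:
            stack.append(variables[tok])
        else:
            stack.append(float(tok))
    return stack.pop()

# "a = Min(b,c) + 2" in RPN form is: b c min 2 +
print(eval_rpn(["b", "c", "min", "2", "+"], {"b": 5.0, "c": 3.0}))  # 5.0
-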
Best practices for computing user equations
Manudelavega replied to Manudelavega's topic in LabVIEW General
I haven't had great experiences with DLLs in the past, so I wanted to stay away from them.
-
Best practices for computing user equations
Manudelavega replied to Manudelavega's topic in LabVIEW General
I tried, but no, Eval Formula Node.vi doesn't support min and max. Only the formula node supports them (along with modulo, remainder, and others that I also need).
-
A few years ago I wrote my own equation computing algorithm for my company's flagship software. The user writes equations using variable names and constants (for example a=2*b+3), and those equations run continuously every 100 ms.

The pros:
- The user can add Min and Max expressions inside the equation. For example a=Min(b,c)+2.
- The syntax supports parentheses. For example a=3*(b+c).

The limitations:
- You can have several operators, but their priority is not respected. For example the result of 1+2*3 will be 9 instead of 7. The user has to write 1+(2*3) or 2*3+1 to get the correct result.
- You can't put an expression inside a "Power" calculation. For example, you can do a+b^c but you can't do a^(b+c). You would need to create a new variable d=b+c and then do a^d, so now you have 2 equations running in parallel all the time instead of 1.
- There is no support (even though it wouldn't be hard to add) for sin, cos, tan, modulo, square root...

I am now thinking of using a built-in LabVIEW feature (or one the community might have created) so as not to reinvent the wheel completely. Surely I am not the only person who needs to compute equations. I looked at vi.lib\gmath\parser.llb\Eval Formula String.vi and it seems to answer 90% of my needs and is simple to use, but it doesn't support Min and Max expressions, and writing a hybrid system would be complicated. What do people use out there?

If I do need to reinvent the wheel, I found interesting resources such as https://en.wikipedia.org/wiki/Shunting-yard_algorithm and https://en.wikipedia.org/wiki/Operator-precedence_parser (see the sketch below), so I think I can pull it off, but it's going to be very time consuming! Cheers
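For concreteness, here is roughly what the shunting-yard approach looks like — a minimal Python sketch of the Wikipedia algorithm (token lists stand in for a real tokenizer; unary minus and error handling are omitted). It pairs with the RPN evaluator sketched earlier in the thread:

# Convert an infix token list to RPN, honoring operator precedence,
# parentheses, a right-associative "^", and two-argument functions
# like min/max (with "," as the argument separator).
PREC = {"+": 1, "-": 1, "*": 2, "/": 2, "^": 3}
RIGHT_ASSOC = {"^"}
FUNCS = {"min", "max"}

def to_rpn(tokens):
    out, stack = [], []
    for tok in tokens:
        if tok in FUNCS:
            stack.append(tok)
        elif tok in PREC:
            while (stack and stack[-1] in PREC and
                   (PREC[stack[-1]] > PREC[tok] or
                    (PREC[stack[-1]] == PREC[tok] and tok not in RIGHT_ASSOC))):
                out.append(stack.pop())
            stack.append(tok)
        elif tok == ",":
            while stack[-1] != "(":
                out.append(stack.pop())
        elif tok == "(":
            stack.append(tok)
        elif tok == ")":
            while stack[-1] != "(":
                out.append(stack.pop())
            stack.pop()                       # discard the "("
            if stack and stack[-1] in FUNCS:  # the ")" closed a function call
                out.append(stack.pop())
        else:                                 # number or variable name
            out.append(tok)
    while stack:
        out.append(stack.pop())
    return out

# 1+2*3 now yields 7 instead of 9, and Min works inside expressions:
print(to_rpn(["1", "+", "2", "*", "3"]))                   # 1 2 3 * +
print(to_rpn(["min", "(", "b", ",", "c", ")", "+", "2"]))  # b c min 2 +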
-
Thanks for the explanation. Yes, bad terminology then...
-
I read Darren's excellent article about creating healthy dialog boxes (http://labviewartisan.blogspot.ca/2014/08/subvi-panels-as-modal-dialogs-how-to.html). I thought using a dynamic call and selecting the option "Load and retain on first call" would prevent loading the subVI into memory when loading (but not running) the caller VI. I created a super light project to test it but didn't get the expected behavior:
- If I don't open the Caller, I don't see any VI in memory (except for the VI that reads the VIs in memory, of course).
- If I open the Caller, I see both the Static Callee and the Dynamic Callee, whereas I expected to see only the Static Callee.
Did I misunderstand how that option works? VIs in memory.zip
-
Any difference between application 64 bits vs 32 bits?
Manudelavega replied to ASalcedo's topic in LabVIEW General
If at some point you need to integrate external code, you'll likely find out that it only works with 32-bit. So unless your scope is well defined and you know it won't change, you're taking a big risk by moving to LV 64-bit.
-
I see. So, long story short, you don't really extract the PNG and feed it to .NET; rather, you manipulate the ICO and then feed it to .NET, still as an ICO. Correct?
-
My question is about the piece of code in this screenshot of Read Ico File.vi. You seem to reassemble an .ico file. Is that what you're doing?
-
I like to understand the code I'm using instead of just blindly using it "since it works". I'm looking at your Read Ico File.vi. I found a Wikipedia page that describes the format of an .ico file, so I understand perfectly what you're doing when you index the bytes coming from it. However, I can't find anything that describes the content of a .NET Image Byte Array, so I don't understand what you're doing when you create that array. Can you point me to a web page that would explain this format? Cheers!
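For anyone else reading along, this is the byte layout I mean — a minimal Python sketch of the ICONDIR/ICONDIRENTRY parsing described on that Wikipedia page (the file name is hypothetical):

import struct

# Walk the ICONDIR header and ICONDIRENTRY records of a .ico file,
# as described on the Wikipedia "ICO (file format)" page.
# All fields are little-endian.
def read_ico_directory(path):
    with open(path, "rb") as f:
        data = f.read()
    reserved, ico_type, count = struct.unpack_from("<HHH", data, 0)
    assert reserved == 0 and ico_type == 1, "not an .ico file"
    entries = []
    for i in range(count):
        (width, height, _colors, _reserved,
         _planes, bpp, size, offset) = struct.unpack_from("<BBBBHHII", data, 6 + 16 * i)
        entries.append({
            "width":  width or 256,   # 0 encodes 256 pixels
            "height": height or 256,
            "bpp":    bpp,            # bit depth, e.g. 32 for RGBA
            "size":   size,           # byte count of the image data
            "offset": offset,         # where that image starts in the file
        })
    return entries

for entry in read_ico_directory("my_icon.ico"):  # hypothetical file
    print(entry)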
-
Perfect
-
Yeah, no time to dig through VB code for now. Thanks, I appreciate the help!
-
Looks neat, thanks hooovahh. Now my problem is that it always uses the first (highest-resolution) image of my .ico file. I need some code to choose which image to use. The vi.lib\Platform\icon.llb\Read Icons from ICO File.vi VI returns an array of clusters named icon data, but those clusters are different from the imagedata.ctl typedef used by your VIs.
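What I'm after is basically a nearest-size search over the icon directory. A rough Python sketch of the selection logic (reusing the hypothetical read_ico_directory helper from my earlier post; the 16-pixel target is made up):

# Pick the directory entry closest to a desired size, preferring the
# highest bit depth as a tie-breaker, instead of always taking the
# first image in the file.
def pick_icon(entries, desired=16):
    return min(entries, key=lambda e: (abs(e["width"] - desired), -e["bpp"]))

entries = read_ico_directory("my_icon.ico")      # hypothetical file
best = pick_icon(entries, desired=16)
with open("my_icon.ico", "rb") as f:
    f.seek(best["offset"])
    image_bytes = f.read(best["size"])           # PNG or BMP/DIB payload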
-
Reviving an old thread! It's 10 years later, and the VI rolfk mentioned looks like it hasn't gotten any love. Any chance you know of an updated equivalent that would support 32-bit depth?
-
Thanks for all those answers!
-
Uhm, if I really want/need to understand this, I guess I need to dig into type descriptors and so on?
-
I was shocked to see that LabVIEW manages to execute this code properly. Can anybody explain why it works? Aren't clusters and arrays two different things?
-
I've spent my fair share of time dealing with the Advanced TDMS API to achieve the same thing: decimating the data to plot it in a graph. My benchmarking showed that there is a sweet spot where performing one Read operation for each sample you want to keep (using the Offset hooovahh was talking about) starts being more efficient than performing a single Read operation that returns a giant amount of data and then keeping only 1 sample out of every 100, for example. So I'd recommend looking into that if you go with a TDMS-based solution; a rough sketch of the two strategies is below.

And since we are dealing with files on disk: SSD drive!! Forget about HDD, it's way too slow; your users will have to wait several seconds each time they need to refresh the graphs.

And yes, cut the file size in half by using SGL instead of DBL! This only poses an issue for the timestamp, since LabVIEW uses DBL. I worked around this by using 3 SGL columns to represent the timestamp.

You're on the right track!
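To show what I mean by the sweet spot, here are the two strategies side by side — a rough Python sketch on a flat binary file of SGL (float32) samples rather than the real TDMS API (the file name and decimation factor are made up):

import numpy as np

PATH, DECIMATE = "samples.f32", 100   # hypothetical file and factor

# Strategy A: one big read, then keep 1 sample out of every DECIMATE.
def read_all_then_stride(path, step):
    return np.fromfile(path, dtype=np.float32)[::step]

# Strategy B: one small seek+read per kept sample (the Offset approach).
def seek_per_sample(path, step):
    kept = []
    with open(path, "rb") as f:
        f.seek(0, 2)
        n = f.tell() // 4             # float32 = 4 bytes per sample
        for i in range(0, n, step):
            f.seek(i * 4)
            kept.append(np.frombuffer(f.read(4), dtype=np.float32)[0])
    return np.array(kept)

# Both return the same samples; which is faster depends on the
# decimation factor and the drive, hence the sweet spot.
assert np.array_equal(read_all_then_stride(PATH, DECIMATE),
                      seek_per_sample(PATH, DECIMATE))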
-
Also, is "LabVIEW Feedback for NI" the right forum for this?