
Leaderboard


Popular Content

Showing content with the highest reputation since 03/02/2020 in Posts

  1. 4 points
    The main difference between LabVIEW and a C compiled file is that the compiled code of each VI is contained in that VI, and the LabVIEW Runtime then links these code chunks together when it loads the VIs. In C the code chunks are per C source file, put into object files, and all those object files are then linked together when building the final LIB, DLL or EXE. Such an executable image still has relocation tables that the loader has to adjust when the code is loaded at a different memory address than the preferred address defined at link time, but that is a pretty simple step. The LabVIEW runtime linker has to do a bit more of the work that the linker part of a C toolchain has mostly already done.
    For the rest, LabVIEW's execution of code is much more like a compiled C executable than any virtual machine language such as Java or .NET's IL bytecode, because the compiled code in the VIs is fully native machine code. Bytecode is also address-independent by nature, while machine code, although it can use position-independent addressing, usually has some absolute addresses in it. It's very easy to jump to conclusions from looking at a bit of assembly code in the LabVIEW runtime engine, but those conclusions are not usually correct. In this case the code chunks in each VI are real compiled machine code targeted directly at the CPU.
    In the past this was done through a proprietary compiler engine that created the final machine code in several stages. It already included the separation where the diagram was first translated into a directed graph, which was then optimized in several steps, and the final result was put through a target-specific compiler stage that created the actual machine code. This was initially done in such a way that it wasn't easy to switch the target-specific compiler stage on the fly, so cross-compiling wasn't easy to add when they developed the Real-Time addition to LabVIEW. They eventually improved that with a unified API to the compiler stages so that they could be switched on the fly to allow cross-compilation for the real-time targets, which appeared in LabVIEW 7. LabVIEW 2009 finally introduced the DFIR (Dataflow Intermediate Representation) by formalizing the directed-graph representation further, so that more optimizations could be performed on it, and in LabVIEW 2010 it could be used as input to the LLVM (Low-Level Virtual Machine) compiler infrastructure. While this would theoretically allow leaving the code in an intermediate-language form that is only translated on the actual target at runtime, that is not what NI chose to do in LabVIEW, for several reasons. LLVM produces fully compiled machine code for the target, which is then stored in the VI (for a built executable, or whenever separate compiled code is not enabled), or otherwise in the compile cache.
    When you load a VI hierarchy into memory, all the code chunks for each VI are loaded into memory, and based on linker information created at compile time and also stored in the VI, the linker in the LabVIEW runtime makes several modifications to each code chunk so that it is executable at the location where it was loaded and calls correctly into the other code chunks. This is indeed a bit more than what the PE loader in Windows needs to do when loading an EXE or DLL, but it isn't really very different. The only real difference is that the linking of the COFF object modules into one bigger image has already been done by the C compiler when building the executable image, and that LabVIEW doesn't really use COFF or OMF to store its executables, since it does all the loading and linking of the compiled code itself and doesn't need to rely on an OS-specific binary image loader.
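    To make that load-time linking step concrete, here is a toy C sketch of my own (purely an analogy, not LabVIEW's actual mechanism): each compiled chunk calls its subVIs through a patch table that a "loader" fills in once everything is in memory, instead of through addresses fixed at link time.

        #include <stdio.h>

        typedef double (*vi_code)(double);

        /* The "compiled chunk" of the caller refers to its subVI only
           through slot 0 of a patch table, which stays empty until the
           loader fills it in at load time. */
        static vi_code patch_table[1];

        static double caller_chunk(double x) { return patch_table[0](x) * 2.0; }
        static double subvi_chunk(double x)  { return x + 1.0; }

        int main(void)
        {
            patch_table[0] = subvi_chunk;        /* the "runtime linker" step */
            printf("%g\n", caller_chunk(20.0));  /* prints 42 */
            return 0;
        }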
  2. 4 points
    I found this tonight while working on a project: https://remixicon.com/ Really good icon library with modern-looking icons where you can customize the color and size of the icons, then download them as PNG files. I then import them into a LabVIEW pict ring and it's off to the races.
  3. 3 points
    MAX. That is all.
  4. 2 points
    I definitely saw discussions about this somewhere that were not LabVIEW 2020 related. The solution was to create (or edit) an environment variable for the MKL library itself, where some thread configuration was forced explicitly rather than letting MKL detect the right configuration automatically. See this thread for some discussion of the problem and possible solutions. It seems to be related to recent AMD Ryzen CPUs with a specific SSE architecture. That it falls back to trying to load from the penguin path is rather strange, however. Generally these paths only exist in the executable as debug messages relating to the source module the specific code was compiled from.
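    If I recall the workaround correctly, it involved the MKL_DEBUG_CPU_TYPE environment variable (value 5 forces the AVX2 code path on older MKL versions on non-Intel CPUs); treat the exact name and value as an assumption and check the linked thread. A minimal C sketch of setting it process-wide before the analysis library loads:

        #include <stdlib.h>

        /* Must run before mkl_rt.dll (or LabVIEW's analysis DLL) is first
           loaded, since MKL reads the variable during its CPU dispatch. */
        int force_mkl_codepath(void)
        {
            return _putenv_s("MKL_DEBUG_CPU_TYPE", "5");  /* MSVC CRT call */
        }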
  5. 1 point
    Help! Is there a way to parse this JSON string into this cluster in the following way:
    - The order of elements inside the "Parameters" cluster does not matter. No error.
    - I can use enums.
    - The names of the elements matter. "TWOoo" shouldn't end up populating "two" (even if the order of elements matches the cluster). I want an error here instead of "two" getting its default value.
    - I don't care if the JSON text has additional elements that aren't in the cluster. No error.
    In short, the features: strict name checking, order independence, and an error if an element is missing. I've tried so many things, but they all fail in one way or another. See attached snippet.
  6. 1 point
    There is no need to go through external code for this. There have been many attempts at crypto libraries written natively in LabVIEW, and they didn't fail because it is impossible, but because nobody is interested in spending some time searching for them or, what a dirty word, forking over a few dollars for them. Authors have put out libraries in the past only to have them forgotten by the public, and that is the surest way to discourage any maintenance work and improvement. Probably the first one was Enrico Vargas, who wrote a pretty versatile crypto library entirely in LabVIEW somewhere around 2000. It contained many hash and even symmetric algorithms that were pretty well tested, and he was an expert in the subject. And yes, he charged something for that library, which I found reasonable; I bought a license too and collaborated with him a little on some algorithms and on testing them. I doubt he made much money with it though, as with most toolkit providers. Eventually it died off because he pursued other career options, and maybe also partly because providing support for something that many were asking for but very few were willing to pay for is a frustrating exercise.
    A little googling delivers the following solutions currently available:
    https://github.com/gb119/LabVIEW-Bits/tree/master/Cryptographic Services/SHA256
    https://lvs-tools.co.uk/software/encryption-compendium-labview-library/
    https://gpackage.io/packages/@mgi/hash
    Interesting disclaimer in the last link! 😀 I would say whoever understands its implications is already aware of the limits of using such functions, and whoever doesn't won't be bothered by it. There are several aspects to this; for instance, calling the same function in .NET or the WinAPI (also a possibility) is not necessarily safer, as the actual string may still be somewhere in LabVIEW memory after the function is called, no matter how diligent the external library is about clearing any buffers it uses. Also, many hashes are mostly used for hashing known sources, which do not have the problem that the original string or byte stream needs to stay secret, as it is already in memory elsewhere anyway. So for such applications, the use of these functions in LabVIEW does not raise any extra concerns about lingering memory buffers that might contain "the secret" after the function has finished.
  7. 1 point
    I might be wrong, but I've never seen a LabVIEW built-in, exported function like nextafter or nexttoward in extcode.h or inside any of the managers. But there's a WinAPI implementation according to MSDN: https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/nextafter-functions?redirectedfrom=MSDN&view=vs-2019 So you might try to use those functions from msvcrt.dll by means of CLFNs. Untitled 1.vi Hope it helps somehow. Oh, and by the way, if you're not on Windows, you could use libm.so on Linux and libSystem.dylib on macOS to call the mentioned functions.
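    For reference, this is what those CRT functions do; a minimal C sketch using only the standard math.h call, nothing LabVIEW-specific assumed:

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double up   = nextafter(1.0, 2.0);  /* smallest double above 1.0 */
            double down = nextafter(1.0, 0.0);  /* largest double below 1.0  */
            printf("%.17g\n%.17g\n", up, down); /* 1.0000000000000002 and
                                                   0.99999999999999989 */
            return 0;
        }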
  8. 1 point
    So I put something together. It implements NextUp and NextDown. I was thinking it would be nice to have an approximation compare that takes a number of up/down steps as the tolerance (the idea is sketched below). Let me know if you think there is any interest. https://github.com/taylorh140/LabVIEW-Float-Utility If you're curious, please check it out, and make sure I don't have any hidden bugs. 😁
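    For what it's worth, here is a rough C sketch of that kind of tolerance compare, under the assumption of IEEE-754 doubles (NaN and overflow of the ULP distance are ignored for brevity); the LabVIEW version in the repo may do it differently:

        #include <stdint.h>
        #include <stdlib.h>
        #include <string.h>

        /* Map a double onto a scale where consecutive representable values
           differ by exactly 1, keeping ordering monotonic across zero. */
        static int64_t ordered_bits(double x)
        {
            int64_t i;
            memcpy(&i, &x, sizeof i);             /* bit-exact reinterpret */
            return (i < 0) ? (INT64_MIN - i) : i;
        }

        /* Equal within max_ulps up/down steps. */
        int almost_equal_ulps(double a, double b, int64_t max_ulps)
        {
            return llabs(ordered_bits(a) - ordered_bits(b)) <= max_ulps;
        }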
  9. 1 point
    Far from perfect, but what I have at the moment is: on a Mouse Move, Mouse Down, or Mouse Wheel event (with the event limit set to 1, just like you), read the Top Left Visible Cell. This gives the tag and column number. Using a second property node, write the Active Item Tag and Active Column Number to what was just read, and then read the Active Cell Position Left. I then use a feedback node to see whether the top-left tag, column, or cell position left has changed since the last time the event fired (the comparison step is sketched below). If it hasn't, do nothing. If it has, go do whatever it takes to shift the current image either up and down, or left and right. Left and right are property nodes on the controls to move them, while the value can stay the same. As for shifting an image up and down, I use Norm's low-level code found here, which is pretty fast, though I did have to add a couple of opcodes to it since his post. Alternatively, you might be able to set the image origin to get the same effect, but that uses a property node, as opposed to the value of the image control.
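    In C-style pseudocode, the comparison step amounts to something like this (a sketch of the idea only; in the real thing it's the feedback node on the block diagram):

        #include <stdbool.h>
        #include <string.h>

        typedef struct { int top_tag; int column; int cell_left; } ScrollState;

        /* Mouse events fire far more often than the view actually scrolls,
           so cache the state seen on the previous event and skip the
           expensive image shift when nothing moved. */
        bool view_moved(const ScrollState *now, ScrollState *last)
        {
            if (memcmp(now, last, sizeof *now) == 0)
                return false;   /* same position: do nothing */
            *last = *now;       /* remember for the next event */
            return true;        /* caller shifts the image */
        }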
  10. 1 point
    You can change your terminals to dynamic dispatch at any time by choosing This Connection Is > Dynamic Dispatch Input/Output (Required) at the terminal block. You'll find that this option is only available for class inputs and outputs. To change your VI from static dispatch to dynamic dispatch, simply change both - input and output terminals - to dynamic dispatch.
  11. 1 point
    Here is a KB article regarding the differences between controls and indicators, local variables and property nodes: Control/Indicator, Local Variable, and Value Property Node Differences There is also an article that explains the different threads in LabVIEW and what they are used for: How Many Threads Does LabVIEW Allocate?
  12. 1 point
    It works just fine for me on the latest versions of Chrome, Firefox and Edge. Stupid question: have you tried clearing cookies and cache? On a side note:
  13. 1 point
    I don't know if a "proper" way exists, hence https://forums.ni.com/t5/NI-Package-Management-Idea/Add-ability-to-reclaim-space-taken-by-cached-packages/idi-p/4024241 I've been manually deleting the *.nipkg files, and the very old folders in C:\ProgramData\National Instruments\Update Service\Installers, without ill effects to my day-to-day work. I think NI Package Builder or LabVIEW Application Builder can't bundle a dependency if its package or local installer is gone, however.
  14. 1 point
    I did an English/Russian project last year based on NI Unicode tool: https://forums.ni.com/t5/Reference-Design-Content/LabVIEW-Unicode-Programming-Tools/ta-p/3493021?profile.language=en
  15. 1 point
    I don't know which debugging technique you are employing, but if it's at edit time, you could temporarily change the scope of that method to public. It's not a silver bullet, but it works in a crunch. Now, if it's at runtime, I don't think there is any way apart from writing extra code into your class.
  16. 1 point
    I feel like you're stretching here to make similarities seem to exist when they do not. Your argument for the LabVIEW Runtime is obviously flawed. If that were the case, every program which accesses Win32 functions would be virtualised? No. It isn't. It's linking between actual compiled machine code. It is not platform-independent; it is compiled for the current machine architecture. So is Visual C++ virtualised because there are VC++ runtimes installed on Windows? To paraphrase "The Princess Bride".
  17. 1 point
    First, you need to add an INI token to the LabVIEW.ini file so you can display Unicode in LabVIEW: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000kJRNSA2&l=en-CA That allows you to at least display the characters when they are copied onto the front panel or block diagram (instead of them showing up as a "?"). However, it does not work for caption text unless your OS environment supports the language pack. As you can see below, my OS has French and English installed, but not Chinese or Cyrillic. I can display the Unicode characters when I paste directly from a source, so you can try it on a virtual machine with the proper regional settings, and that should work. ** Edit: there is probably another step missing, because I can't get it to show up on my VM with a Russian keyboard installed... ** Edit #2: OK, it works. It turns out that the Cyrillic character I had copied from Google was not part of the Russian keyboard. I have no way to tell :-). But now it works with another character generated from the Russian keyboard itself: "ф".
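    For reference, if I remember the linked article correctly, the token is the one below, added under the [LabVIEW] section of LabVIEW.ini (double-check against the article, as I'm quoting from memory):

        UseUnicode=True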
  18. 1 point
    Oops, looks like I missed when this feature went in. As AristosQueue mentioned in a reply to your Idea Exchange post, you can use the Dependencies > Missing Dependency Names and Dependencies > Missing Dependency Paths properties of the GObject class in LabVIEW 2015 and later. Can you verify that these properties give you what you need, and I'll close out the Idea Exchange post as Already Implemented?
  19. 1 point
  20. 1 point
    It's been a long time since one of those hackish threads about LV internals on LAVA, so here's something. As always it's strictly experimental and "just for fun", so don't run the things below on your work systems, otherwise you'll easily get LV crashed, your files deleted and your disk reformatted. You're warned.
    As some of you may know, starting from LV 2017 there exists a hidden ArrayMemInfo node, available with scripting. It provides some info on the input array, including its data pointer. ArrayMemInfo.vi My first discovery was that this pointer is stable and doesn't change until the VI is stopped. So its common use case could be memory copy operations for some third-party DLLs. That's an obvious and rather boring usage. Playing with it for a while, I discovered that I can not only read but also write to that pointer, get its handle, resize the array, etc. - read on to see the examples. I concluded that this way it's possible to manipulate the Transfer Buffer, or the DCO's Transfer Data, or whatever it's called.
    From then on I started to ask myself where the ArrayMemInfo node takes that pointer from, and whether we could do the same without this instrument. And here was another big surprise for me - yes, we can! LabVIEW's familiar CLFNs easily do that, when you configure the input parameter as:
    Type = Adapt to Type
    Constant = Yes (checked)
    Data format = Pointers to Handles
    Actually you could set Type to Array and set Data type and Array format to the actual data type of your array; that's not so important. What is important is that the Constant checkbox is checked. In this case LV not only passes the native data types, but also does not create a copy of the input data, providing the pointer/handle to the Transfer Buffer's data instead! I was very surprised, because this has been there since LV 2009, but I never paid attention to it. I also did not find it described anywhere in the official documents. Knowing that, I replaced ArrayMemInfo with CLFNs in my basic samples. Here's how we could read/write a common numeric indicator: numeric_test.vi And this is the same for a common numeric array: array_test.vi
    As you can see, there's a major disadvantage to the whole technique - you have to pass the wire to the indicator in a loop to get the indicator updated in time. Nevertheless there exists a pair of VIs that do something similar and do not require any wires to be connected to a control/indicator. I'm talking about Get Control Values by Index and Set Control Values by Index, first introduced in LV 2013. So I dug into these VIs deeper and came to two internal functions, which are used by the aforesaid VIs and by ArrayMemInfo behind the scenes. ReadDCOTransferData: ReadDCOTransferData.vi WriteDCOTransferData: WriteDCOTransferData.vi As their top-level wrappers, these functions don't require any wires and take care of the low-level operations, i.e. there's no need to resize the array with NumericArrayResize, manually set its dimensions, set the actual data with MoveBlock, etc. Although you don't get absolute control over a control or indicator (e.g., its pointer or handle), you definitely don't need it, as all the work is done by ReadDCOTransferData/WriteDCOTransferData. So we come to the conclusion that the best option has already been implemented. But... if you really want... there's a way to get the Operate Data pointer for a generic control/indicator. numeric_test_ncg.vi array_test_ncg.vi
    However, I'd like to note that this technique has a lot of pitfalls besides the low-level manipulations shown. The front panel object doesn't get updated at all, for example, so it's necessary to redraw it manually with the Invalidate method (or click on the object, or hide and show the window). Local variables of the object don't get updated either, even when the object is invalidated. I'm sure there are other things not done that LV usually does on a normal read/write. So these diagrams should be taken as a concept only.
    In the end, I'd say I can hardly imagine where I would really use any of these non-standard read/write methods. In LV I will surely continue to use the native Get Control Values by Index and Set Control Values by Index VIs, because they do everything I want. There might be one use case though, when communicating with external code: it would be easier to write debug (or any other) data from a custom DLL into an LV indicator directly, without using PostLVUserEvent or DbgPrintf. Extcode.h doesn't expose many APIs to the user, so calling ReadDCOTransferData/WriteDCOTransferData might be preferable inside the library. But I'm not going to check/test that now, because I haven't written DLLs for LabVIEW for a while.
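    As a side note, for a CLFN configured as "Pointers to Handles", the receiving C side looks roughly like the sketch below. It assumes a 1-D float64 array and the usual array-handle layout from extcode.h; since LabVIEW passes the handle of its own transfer data, writing through it modifies the wire data in place.

        #include "extcode.h"   /* from LabVIEW's cintools directory */

        /* Conventional layout of a 1-D LabVIEW array of float64. */
        typedef struct {
            int32   dimSize;
            float64 elt[1];
        } DblArrRec, *DblArrPtr, **DblArrHdl;

        MgErr FillArray(DblArrHdl arr)
        {
            if (arr == NULL || *arr == NULL)
                return mgArgErr;
            for (int32 i = 0; i < (*arr)->dimSize; i++)
                (*arr)->elt[i] = (float64)i;  /* writes into LV's own buffer */
            return noErr;
        }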
  21. 1 point
    I have only done minimal coding in TwinCAT. I've been involved on the side with such a project, where we had to provide support to get a few NI EtherCAT chassis to work in a system with Beckhoff controllers, Bronkhorst flow controllers and several other devices. While the Bronkhorst, Festo and NI IO chassis were on EtherCAT, there were other devices using Modbus and Ethernet communication, and that turned out to be more complex to get working in TwinCAT than initially anticipated. I was happy to sit on the sidelines and watch them eventually get things solved, rather than getting my own hands dirty with TwinCAT programming. 😀
  22. 1 point
    Figured someone over here might be interested... over the weekend, I built a new right-click menu item plugin for editing set controls and constants. The UI is bad, but it works; if anyone wants to work on the UI, you're welcome to do so. Building a map editor would follow the same pattern as the set editor. https://forums.ni.com/t5/LabVIEW-Shortcut-Menu-Plug-Ins/Edit-A-Set-Value-llb/ta-p/4020190
  23. 1 point
    Absolutely. This is what I have had to do to get things working. Unfortunately this is not the only VI I use that calls into this DLL, and this is definitely a game of whack-a-mole I don't have the energy to play. The problem is more... "why does this happen?". I am sure you know how frustrating it is when something works perfectly in the IDE but then has trouble in an executable. I thought I was at the point in my LabVIEW journey where I had experienced first-hand most of the weirdness that sometimes comes with the territory. This is an altogether new one for me, though.
  24. 1 point
    The New Data Value Ref and Delete Data Value Ref nodes can be used in inline VIs (and thus malleable VIs) in LV 2020.
  25. 1 point
    It is not a bug. It should break for any unsigned integers because that's how the "negate" method works.
  26. 1 point
    pylabview could do it, but it looks like there are differences in the "Virtual Instrument Tag Strings" section, and the parser can't read that section at the moment. So only NI can help you, unless you are willing to help with the development of pylabview.
  27. 1 point
    Your best bet is to put your request here: NI Forums - Version Conversion
  28. 1 point
    Found a fix for this. It should be fixed in LV 2020. The bug ONLY affects copying from a 1-element cluster of variant to a variant. Or a cluster of a cluster of variant to a variant. Or... you get the idea... "any number of cluster shells, all containing 1 element, culminating in a variant" being copied to a variant.
    This was a fun bug... consider this: The memory layout for a byte-size integer is { 8 bits }. The memory layout for a cluster of 1 byte-size integer is { 8 bits }. They are identical. "Cluster" doesn't add any bits to the data; that's just the type descriptor for the data in that location. This is true for any 1-element cluster: the memory layout of the cluster is the same as the memory layout of the element by itself. This is true even if the 1 element is a complex type, such as a nested cluster of many elements, or an array.
    When a VI is compiling, if data needs to be copied (say, when a wire forks), LabVIEW generates a copy procedure in assembly code. For trivial types such as integers, the copy is just a MOV instruction. But for compound types, we may need to generate a whole block of code. At some point, the complexity is such that we would rather generate the copy procedure once and have the wire fork call that procedure. We want to generate as few of those as we have to -- it keeps the code segment small, which minimizes page faults, among other advantages. We also generate copy procedures for compound coercions (like copying a cluster of 5 doubles into a cluster of 5 integers).
    Given all that, LabVIEW has some code that says, "I assume that type propagation has done its job and is only asking me to generate valid copy procs. So if I am asked to copy X to Y, I will remove all the 1-element shells from X and all the 1-element shells from Y, and then I will check whether I have an existing copy proc." Nowhere in LabVIEW will we ever allow you to wire a 1-element cluster of an int32 directly to an int32, so the generator code never gets that case. In fact, the only time we allow a 1-element cluster of X to coerce directly to X is... variant. The bug was that we were asking for a copy proc for this coercion, and the code was saying, "Oh, I have one of those already... just re-use copy-variant-to-variant." That will never crash, but it is also definitely not the right result! We had to add a check to handle variant specially, because variant can eat all the other types. So if the destination is a variant, we have to be more precise about copy proc re-use. I thought this was a neat corner case.
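    A C analogue of the layout claim, as a sketch of my own (nothing to do with LabVIEW's actual sources): a single-element struct occupies exactly the same bytes as its element, so a copy routine that only looks at memory layout cannot tell the two apart.

        #include <assert.h>
        #include <stdint.h>

        struct shell { int8_t v; };   /* the "1-element cluster" */

        int main(void)
        {
            /* The shell adds no bits: both are one byte. */
            static_assert(sizeof(struct shell) == sizeof(int8_t),
                          "1-element shell adds no bits");
            return 0;
        }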
  29. 1 point
    @The Q My library is not complete by any means, but what works really works... I've followed a TDD approach. So far I've covered only 79 requirements out of 141, and I haven't started any branches to support MQTT 5.0 yet... So it covers the basic needs, such as those shown in the presentation you're referring to, but nothing fancy like QoS 1 and 2 and the like (or TLS, for that matter). If you want something more complete, I'd check Wireflow's implementation first. Now, I'd really like to get other folks to contribute to my open-source project. It would give me a moral boost to continue pushing it further. In the meantime, thanks for the advertisement!!!
  30. 1 point
    UPDATE: I found the issue. I had to close newly opened references in "gs_getdata2d.vi". First of all, thanks for the library. I am having one issue though. After about 10 or 15 iterations, the system errors out, and the error handling (including closing references) is unable to reset itself. I suspect somewhere there is a reference that is not closing properly, as when I stop the program and restart, everything works again. Here is the code I'm using for preliminary testing.
  31. 1 point
    I thought this was an interesting exercise, so here is my attempt. OpenG has some image tools, one of which is the ability to open a GIF, but for some reason it crapped out and died with your GIF even after resaving it to something much smaller. I did find some other GIF API over on the dark side and used that instead. Attached is a zip; extract it and run Demo Saving Button. It will show the first image. Then when you click the image, it cycles through the first half of the GIF and waits for the simulated save process to complete. Once it is complete, it rotates through the second half of the images, and then after a few seconds returns to the first. Parsing the GIF takes time, so I put the GIF images in as a constant, along with the code to parse the GIF. I also set the pane to the color of the (0,0) pixel in the hope it will blend in better. Honestly, this could be turned into a QControl and made very seamless. Demo Saving Button Gif.zip
  32. 1 point
    To all the warriors out there, a heads-up if your computer is suddenly running at 100% CPU: in the Task Manager, the Task Manager process itself will gobble up any CPU available. I swore it was a virus. A third-party tech support knew about this. It turns out a camera did not shut down correctly. To keep Windows from willy-nilly starting a process it likes, which will screw up tasks such as frame-grabbing response, this Windows setting is used. Open PowerShell and run the following: "C:\Program Files\Point Grey Research\FlyCap2 Viewer\bin64\PGRIdleStateFix.exe enable"
  33. 1 point
  34. 1 point
    Add SuperSecretListboxStuff=True to your labview.ini and reload LabVIEW; new menu items will show up when you right-click on a multicolumn listbox control. Read this thread.
  35. 1 point
    Data communication via a serial port (or any communication, for that matter) takes time. The receiver must receive, process, and reply to the data sent by the sender. I suggest having a look at the examples in the Example Finder; they are well documented. Search for "Simple Serial" or just "Serial". Your VI sends data and immediately reads however many "Bytes at Port" are available, with no delay between sending and reading: Here I modified your implementation so that it waits up to one second for data to arrive:
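    The textual equivalent of that fix is a poll-with-timeout loop. Here is a C-style sketch where bytes_at_port() and sleep_ms() are hypothetical stand-ins for whatever your serial API provides (in the LabVIEW version they correspond to the VISA "Bytes at Port" property and a wait):

        #include <stdbool.h>

        extern int  bytes_at_port(void);  /* hypothetical serial API */
        extern void sleep_ms(int ms);     /* hypothetical delay */

        /* Wait up to timeout_ms for at least `expected` bytes to arrive
           before reading, instead of reading immediately after the write. */
        bool wait_for_bytes(int expected, int timeout_ms)
        {
            for (int waited = 0; waited < timeout_ms; waited += 10) {
                if (bytes_at_port() >= expected)
                    return true;
                sleep_ms(10);   /* poll every 10 ms instead of spinning */
            }
            return bytes_at_port() >= expected;  /* last check at timeout */
        }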
  36. 1 point
    Locally, SQLite. Remotely, MySQL/MariaDB.
  37. 0 points
    I have no idea what other DLLs that one calls. This is really weird.
  38. 0 points
    Yesterday I gave a presentation titled Software Development with LabVIEW Object-Oriented Programming at NIDays in Finland. As part of the talk, I prepared Drawing Tool, an example application that introduces several LabVIEW Object-Oriented Programming techniques. It is a simple drawing application for drawing various shapes of different colors on a drawing area. The application is based on a state machine where the state of the application is represented by several LabVIEW objects, and it demonstrates how a simple state-machine-based application can be created using LabVIEW classes as state parameters. ALERT: The intention of the application is to show various LVOOP features and programming techniques. To make the application easily understandable to LVOOP beginners, I made architectural choices that favor readability over sophistication and extensibility. So the example application is not intended as a guide for how to develop an LVOOP application, but merely as an example of how several LVOOP techniques can be used in application development. Download Drawing Tool Download the NIDays presentation slides

