
Posts posted by LogMAN

  1. Welcome to LAVA 🎉

    2 hours ago, maristu said:

    I’ve read that the performance of labview is better with the compiled code in the VI.

    Not sure where you read that, here is what the LabVIEW help says:

    Quote

    LabVIEW can load source-only VIs more quickly than regular VIs. To maximize this benefit, separate compiled code from all the files in a VI hierarchy or project.

    -- Separating Compiled Code from VIs and Other File Types - LabVIEW 2018 Help - National Instruments (ni.com)

     

    2 hours ago, maristu said:

    Could it be a good idea to unmark the separate compiled code programmatically on each installed file (vi, ctl, class, lvlib… )?

    I don't see the benefit. Your projects will take longer to load, and if the compiled code breaks you can't simply delete the cache; you have to forcibly recompile your VIs, which is effectively what you have right now.

  2. 2 hours ago, rharmon@sandia.gov said:

    I was thinking I would just branch off the second project and make the necessary changes to make the second project work. It would not be my intention to ever re-unite the branches.

    What you describe is called a fork.

    Forks are created by copying the main branch of development ("trunk") and all its history to a new repository. That way forks don't interfere with each other and your repositories don't get messy.
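    For illustration, one way to do this on the command line (the URLs below are placeholders, not real repositories):

      git clone https://example.com/original/project.git project-fork   # copy the repository and its full history
      cd project-fork
      git remote set-url origin https://example.com/yourname/project-fork.git
      git push -u origin master                                          # publish the copy to its own repository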

  3. In my opinion NI should finally make up their mind as to whether objects are inherently serializable or not. The current situation is unsatisfactory.

    There are native functions that clearly break encapsulation:

    Then there is one function that doesn't, although many users expect it to (not to mention all the other types it doesn't support):

    Of course users will eventually utilize one to "fix" the other. Whether or not it is a good design choice is a different question.

  4. 11 minutes ago, rharmon@sandia.gov said:

    I think reading through these posts I'm leaning toward Subversion or Mercurial... Probably Mercurial because from my conversation leading me toward source control touched on the need to branch my software to another project.

    If you are leaning towards Mercurial, you should visit the Mercurial User Group: https://forums.ni.com/t5/Mercurial-User-Group/gh-p/5107

  5. 9 hours ago, LogMAN said:

    The user interface does not follow the dataflow model.

    9 hours ago, G-CODE said:

    I think it's really helpful to point that out.

    7 hours ago, Mads said:

    I think this is about as wrong as it can get.

    Please keep in mind that it is only my mental image and not based on any facts from NI.

    9 hours ago, G-CODE said:

    Thinking about this.... I can't figure out if now we are trying to explain why it's expected behavior or if we are trying to justify unexpected behavior (or something in between). 🙂

    Perhaps both. If we can understand the current behavior it is easier to explain to NI how to change it in a way that works better for us.

    7 hours ago, Mads said:

    If an indicator is wired only (no local variables or property nodes breaking the data flow) it shall abide the rules of data flow. The fact that the UI is not synchronously updated (it can be set to be, but it is not here) can explain that what you see in an indicator is not necessarily its true value  (the execution, if running fast, will be ahead of the UI update)- but it will never be a *future* value(!).

    Here is an example that illustrates the different behavior when using indicators vs. property nodes. The lower breakpoint gets triggered as soon as the loop exits, as one would expect.  I have tried synchronous display for the indicator and it doesn't affect the outcome. Not sure what to make of it, other than what I have explained above 🤷‍♂️

    [image]

    7 hours ago, Mads said:

    A breakpoint should really (we expect it to) as soon as it has its input value cause a diagram-wide break, but it does not. Instead it waits for the parallel code to finish, then breaks.

    I agree, this is what most users expect from it anyway. It would be interesting to hear the reasoning from NI, maybe there is a technical reason it was done this way.

  6. Disclaimer: The following is based on my own observations and experience, so take it with a grain of salt!

    15 hours ago, G-CODE said:

    How is it possible to update an indicator if the upstream wire has a breakpoint that hasn't paused execution?

    The user interface does not follow the dataflow model. It runs in its own thread and grabs new data as it becomes available. In fact, the UI update rate is much slower than the actual execution speed of the VI (see VI Execution Speed - LabVIEW 2018 Help - National Instruments (ni.com)). The location of the indicator on the block diagram simply defines which data is used, not necessarily when the data is displayed. In your example, the numeric indicator uses the data from the output terminal of the upper while loop, but it does not have to wait for the wire to pass the data; it grabs the data when it is available. Because of that, you can't rely on the front panel to tell you anything about dataflow.

    Execution Highlighting is also misleading because it isn't based on the actual VI execution, but rather on a simulation of the VI executing (an approximation at best). LabVIEW simply displays the dot and postpones UI updates until the dot reaches the next node. It also forces sequential execution. It probably isn't even aware of the execution system, which is why it will display the dot on wires that (during normal execution) wouldn't have passed any data yet.

    Breakpoints, however, are connected to the execution system, which is why they behave "strangely". In dataflow, data only gets passed to the next node when the current node has finished. The same is true for diagrams! The other thing to keep in mind about breakpoints is that "execution pauses after data passes through the wire" (see Managing Breakpoints - LabVIEW 2018 Help - National Instruments (ni.com)).

    In your example, data passes on the wire after the block diagram is finished. Here is another example that illustrates the behavior (breakpoint is hit when the block diagram and all its subdiagrams are finished):

    [image]

    Now think about indicators and controls as terminals from one block diagram (node) to another.

    [image]

    According to the dataflow model, the left diagram (Block Diagram A) only passes data to the right diagram (Block Diagram B) after it is complete. And since the breakpoint only triggers after data has passed, it needs to wait for the entire block diagram to finish. Whether or not the indicator is connected to any terminal makes no difference.

    This is also not limited to indicators, but any data that is passed from one diagram to another:

    [image]

    Hope that makes sense 😅

     

  7. A Static VI Reference is simply a constant Generic VI Reference. There is no way to distinguish one from another.

    It's like asking for the difference between a string constant and a string returned by a function.

    [image]

    [image]

    The Strictly Typed VI Reference @Darren mentioned is easily distinguishable from a Generic VI Reference (notice the orange marking on the Static VI Reference).

    [image]

    [image]

    However, if you wire the type specifier to the Open VI Reference function, the types are - again - indistinguishable.

    [image]

    [image]

    Perhaps you can use VI Scripting to locate Static VI References on the block diagram?

  8. 6 hours ago, infinitenothing said:

    It's more the output of number to string conversion that's not fixed that's my issue at the moment.

    The number to string functions all have a width parameter: Number To Decimal String Function - LabVIEW 2018 Help - National Instruments (ni.com)

    As long as you can guarantee that the number of digits does not exceed the specified width, it will always produce a string with fixed length (padded with spaces).
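    As a rough text-language analogy (Python, with a made-up function name), the width parameter behaves roughly like this:

      # Right-justify the value and pad with spaces up to the requested width.
      def to_decimal_string(value, width):
          return "%*d" % (width, value)

      print(repr(to_decimal_string(42, 6)))       # '    42'
      print(repr(to_decimal_string(1234567, 6)))  # '1234567' - exceeds the width, so the length is no longer fixed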

  9. I discovered a potential memory corruption when using the Variant To Flattened String and Flattened String To Variant functions on Sets. Here is the test code:

    [image: LV2019SP1f3 (32-bit) Potential Memory Corruption when (de-)serializing Sets]

    In this example, the set is serialized and de-serialized without changing any data. The code runs in a loop to increase the chance of crashing LabVIEW.

    Here is the type descriptor. If you are familiar with type descriptors, you'll notice that something is off:

    [image: type descriptor]

    Here is the translation:

    • 0x0008 - Length of the type descriptor in bytes, including the length word (8 bytes) => OK
    • 0x0073 - Data type (Set) => OK
    • 0x0001 - Number of dimensions (a set is essentially an array with dimension size 1) => OK
    • 0x0004 - Length of the type descriptor for the internal type in bytes, including the length word (4 bytes) => OK
    • ???? - Type descriptor for the internal data type (should be 0x0008 for U64) => What is going on?

    It turns out that the last two bytes are truncated. The Flattened String To Variant function actually reports error 116, which makes sense because the type descriptor is incomplete, BUT it does not always return an error! In fact, about half of the time no error is reported (most often after adding a label to the numeric type in the set constant). I believe that this corrupts memory, which eventually crashes LabVIEW. Here is a video that illustrates the behavior:
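    To make the truncation easier to see, here is a small illustrative Python sketch of the 16-bit words listed above, assuming big-endian byte order for flattened LabVIEW data (not an official parser, just the observed values written out):

      import struct

      # The four words that are actually present in the flattened type string:
      observed = struct.pack(">4H", 0x0008, 0x0073, 0x0001, 0x0004)
      print(observed.hex(" "))  # 00 08 00 73 00 01 00 04

      # The inner type code for U64 (0x0008) would be expected to follow here;
      # that is exactly the part that appears to be cut off.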

    Can somebody please confirm this issue?

    LV2019SP1f3 (32-bit) Potential Memory Corruption when (de-)serializing Sets.vi

  10. On 1/26/2021 at 2:39 PM, pawhan11 said:

    - variant input and return 1d array of variants when input variant is array  (any dimension any data type in array)

    There is a VI in the OpenG LabVIEW Data Library that does this for you.

    [image]

    On 1/26/2021 at 2:39 PM, pawhan11 said:

    - variant input and return array of variants when input variant is  set (any data type in set)

    -variant input and return variant pairs of key values when input is Map (any data type of key and value in map)

    For Maps and Sets I can get type info using Type Parsing Library but not the actual, the only way i see is digging into type descriptors...

    I took this as a challenge and added two VIs to my library on GitHub - https://github.com/LogMANOriginal/LabVIEW-Composition

    Decompose Map extracts variant keys and values of variant maps

    [image: Decompose Map Example]

    Decompose Set extracts variant elements of variant sets

    [image: Decompose Set Example]

    I have successfully tested these VIs with various types, but there could still be bugs. Let me know if you find anything. I strongly discourage using these in production!

  11. It's a separate library. Object composition was actually much more difficult to figure out than the other way around. I have attached the library for LV2017 (without test suites and package configuration). I'll also put this on GitHub in the near future.

    Here is an example that overwrites elements in the private data cluster (the outer IPE addresses the class hierarchy).

    [image: Example]

    Here is an example that uses JSONtext to extract data from a private data cluster. I was looking into this particular case as a way to transition between clusters and objects ;)

    [image: Example JSON]

    Both examples are included in the package.

    Object Decomposition LV2017.zip

  12. 21 hours ago, bjustice said:

    This was meant as a proof of concept to see if it can be done and if it's something worth investigating. I should probably mention that this branch has a few bugs that I haven't fixed yet.

    22 hours ago, bjustice said:

    Internally, we made the early decision that we didn't want to directly access class private data through string flattening or by inspecting the class' *.ctl file.

    Certainly not something I would use in production right now, but I still believe there is some value in this - especially for general-purpose libraries like JSONtext. Anyway, I'll back-save and upload when I have access to LV. By the way, the details are explained on the Wiki: LabVIEW Object - LabVIEW Wiki

    I haven't found a better way to do this without adding (or scripting) methods to every class. The only function that currently breaks encapsulation natively is Flatten To XML, which has its own limitations.

  13. 4 hours ago, Neil Pate said:

    OK, so deleting the branch on the remote only deletes it from being used in future, it still exists in the past and can be visualised?

    Only the name is deleted; commits are left untouched. It is actually possible to restore the branch name if you know the commit hash - https://stackoverflow.com/a/2816728

    This can be useful if you deleted a branch before it was merged into master, or if you want to branch off a specific commit in the history that is currently unlabeled.
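    For example, something along these lines recreates a deleted branch name (the branch name is a placeholder, and the hash has to be looked up first, e.g. via the reflog):

      git reflog                                 # find the hash of the commit the deleted branch pointed to
      git branch restored-branch <commit-hash>   # recreate a branch name at that commit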

    Here is some documentation from Atlassian, generally applicable to GitHub as well:

  14. 13 hours ago, Neil Pate said:

    I still don't really get this. I want to see the branches when I look in the past. If the branch on the remote is deleted then I lose a bit of the story of how the code got to that state don't I?

    The Network Graph mentioned by @JKSH does give you some visualization on GitHub. I personally prefer the visualization in Sourcetree and bash.

    Here is an example for GitHub - microsoft/vscode: Visual Studio Code

    The command I use is

    git log --oneline --graph

    [image: git log --oneline --graph output]

    You can see that branches still exist even after merging. Only the name of the branch, which is really just a label pointing to a specific commit, is lost (although it is typically mentioned in the commit message).

    That said, some branches can be merged without an explicit merge commit. This is called "fast-forward" - https://stackoverflow.com/a/29673993. Maintainers on GitHub can decide if they always want a merge commit, or not.
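    As a quick illustration, assuming a repository with a master branch and a feature branch (names are placeholders):

      git checkout master

      # Option 1: fast-forward if possible - no merge commit, straight line in the graph
      git merge feature

      # Option 2: always record an explicit merge commit - the branch stays visible in the graph
      git merge --no-ff feature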

  15. 2 hours ago, govindsankarmr said:

    The error is -1073807339 , VISA Read in MB Master.lvlib:MB_ADU_RTU.lvclass:RX ADU.vi:1->MB Master.lvlib:MB_Master_Serial.lvclass:Querry.vi:1->MB Master.lvlib:Read Holding Registers.vi:1->Untitled 2. Can anyone help me what is the reason of this error.

    Here is some information about this error: VISA Error -1073807339 (0xbfff0015) Timeout Expired Before Operation Completed - National Instruments (ni.com)

    There could be many reasons for a timeout error. The error message only indicates that a timeout occurred before a reply was received, which is not very useful. NI IO Trace might give you some additional clues.

    Maybe put the master in a shift-register on your while loop. Not sure if that makes a difference.
    [image]

    1 hour ago, Neil Pate said:

    It's been a while but when using the NI Modbus library I found there was some weirdness regarding what the base of the system is. This might just be my misunderstanding of Modbus but for example to read a holding register that was at address 40001 I would actually need to use the Read Holding Register VI with an address of 1 (or 0).

    This is specified in the Modbus Application Protocol, although implementations vary between 1-based and 0-based. The mapping of addresses is typically resolved internally.

    Quote

    The Request PDU specifies the starting register address and the number of registers. In the PDU Registers are addressed starting at zero. Therefore registers numbered 1-16 are addressed as 0-15.

    -- MODBUS Application Protocol Specification V1.1b
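    In code, the mapping from the quote is just an offset of one. A rough Python illustration (the function name is made up; real devices and libraries may apply additional conventions such as the 4xxxx prefix for holding registers):

      # Register number N (1-based, as printed in documentation) maps to
      # PDU address N - 1 (0-based, as sent on the wire).
      def register_number_to_pdu_address(register_number):
          return register_number - 1

      print(register_number_to_pdu_address(1))   # 0
      print(register_number_to_pdu_address(16))  # 15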

     

  16. 2 hours ago, Neil Pate said:

    But what is this all about? I have done a bit of digging and it seems the current best practice is indeed to delete the branch when it is no longer needed. This is also a totally strange concept to me. I presume the branch they are talking about here is the remote branch?

    You got it right. "Delete branch" will delete the branch on your fork. It does not affect the clone on your computer. The idea is that every pull request has its own branch, which, once merged into master, can safely be deleted.

    2 hours ago, Neil Pate said:

    I still am not sure I 100% understand the concept of local and remotes having totally different branches either!

    This can indeed be confusing if you are used to centralized VCSs.

    In Git, any repository can be remote. When you clone a Git repository, the source becomes remote to the clone. It doesn't matter if the remote is on your computer or on another server. You can even have multiple remote repositories if you want to.

    You'll notice that the clone - by default - only checks out the master branch. Git allows you to pull other branches if you want, but that is not mandatory. Likewise, you can have as many local branches of your own as you like without having to push them to the remote (sometimes you can't because the remote is read-only).

    On GitHub, when you fork a project, the original project becomes remote to your fork (you could even fork a fork if you wanted to...). When you clone the fork, the fork becomes remote to your clone. When you add a branch, you can push it to your fork (because you have write-access). Then you can go to GitHub and open a pull request to ask the maintainer(s) of the original project to merge your changes (because you don't have write-access). Once merged, you can delete the branch from your fork, because the changes are now part of master in the original project (there is no reason to keep it).

    Notice that the master branch on your fork is now behind master of the original project (because your branch got merged). Notice also that this doesn't affect your local clone (you have to delete the branch manually). You can now update your fork on GitHub, pull from your fork, and finally delete the local branch (Git will warn you about deleting branches that have not been merged into master).
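    For reference, here is a rough command-line version of that workflow (URLs and branch names are placeholders):

      git clone https://github.com/yourname/project.git              # clone your fork
      cd project
      git remote add upstream https://github.com/original/project.git
      git checkout -b my-feature                                      # work on a branch
      git push -u origin my-feature                                   # push it to your fork, then open the pull request on GitHub

      # after the pull request has been merged:
      git checkout master
      git pull upstream master                                        # update local master from the original project
      git push origin master                                          # update your fork
      git branch -d my-feature                                        # delete the local branch
      git push origin --delete my-feature                             # delete the branch on your fork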

    There is a page which describes the general GitHub workflow: Understanding the GitHub flow · GitHub Guides

    Hope that helps.

  17. For starters, there are a few DWarns:

    c:\nimble\penguin\labview\components\mgcore\trunk\18.0\source\ThEvent.cpp(216) : DWarn 0xECE53844: DestroyPlatformEvent failed with MgErr 42.
    e:\builds\penguin\labview\branches\2018\dev\source\typedesc\TDTableCompatibilityHack.cpp(829) : DWarn 0xA0314B81: Accessing invalid index: 700
    e:\builds\penguin\labview\branches\2018\dev\source\objmgr\OMLVClasses.cpp(2254) : DWarn 0x7E77990E: OMLVParam::OMLVParam: invalid datatype for "Build IGL"
    e:\builds\penguin\labview\branches\2018\dev\source\typedesc\TypeManagerObjects.cpp(818) : DWarn 0x43305D39: chgtosrc on null! VI = [VI "LSD_Example VI.vi" (0x396f46b8)]
    e:\builds\penguin\labview\branches\2018\dev\source\UDClass\OMUDClassMutation.cpp(1837) : DWarn 0xEFBFD9AB: Disposing OMUDClass definition [LinkIdentity "StatusHistory.lvclass" [ Poste de travail] even though 5 inflated data instances still reference it.
    e:\builds\penguin\labview\branches\2018\dev\source\UDClass\OMUDClassMutation.cpp(1837) : DWarn 0xEFBFD9AB: Disposing OMUDClass definition [LinkIdentity "Delacor_lib_QMH_Message Queue V2.lvclass" [ Poste de travail] even though 1 inflated data instances still reference it.  This will almost certainly cause a crash next time we operate on one o

    Here is some information regarding the differences between DWarns and DAborts:

    I'd assume that one of the plugin VIs or classes is broken. You can try and clear the compiled object cache to see if that fixes it.

    Alternatively, uninstall each plugin until the issue disappears (start with LVOOP Assistant; I remember having issues with it in LV2015).

  18. It could be open source and still be maintained by NI, as long as they have a way to generate revenue. There is also great potential in the NXG platform, which - as far as I know - is written in C#. Even if LabVIEW is not of interest to millions of people, keep in mind that most open source projects only receive contributions from a small portion of their users.

    The Linux kernel is probably not a good comparison, because it is orders of magnitude more complex than LabVIEW. Nevertheless, Linux "only" received contributions from approx. 16k developers between 2005 and 2017 - 2017 Linux Kernel Report Highlights Developers' Roles and Accelerating Pace of Change - Linux Foundation. Compare that to relatively young projects such as Visual Studio Code (~1400 contributors) or the .NET Platform (~650 contributors). These are projects with millions of users, but (relatively speaking) few contributors.

    2 hours ago, Neil Pate said:

    The full complement of engineers at NI can barely make any progress into the CAR list, what hope is there for anyone else?

    It depends. Companies might be willing to pay developers to fix issues. Enthusiasts might just dive into the code and open a pull-request with their solution. Some items might not be of particular importance to anyone, so they are just forgotten.

  19. Good selection by @Mefistotelis.

    Try to figure out what motivates them (games, machines, information, ...) and help them find the right resources. Try different things, perhaps something sticks. If not, move on to the next.

    Here are two links that can get you started with Python in a few minutes.
