Posts posted by drjdpowell

  1. 7 hours ago, Neil Pate said:

    Would love to see your more sophisticated testing, especially the holding state data.

    Not sure it is sophisticated, but here is a simple implementation of an Example that increments a counter (state variable), with an error thrown if the count exceeds a MaxCount (state variable).  The py module is called from an LVOOP Object that encapsulates the Python instance.  The Class has methods that call the corresponding py-module functions.

    Python Incrementer.zip

    py module is this:

    # Demo of a Python Module, to be called by LabVIEW
    
    # Notes on Errors:
    #  Return errors in a form matching the LabVIEW Error Cluster.  Examples:
    #    Error = (False,0,"") # No Error
    #    Error = (True,1,"MySource<Err>MyDescription") # 1==Error in input
    
    import time  # used for sleep()
    
    # Example of having global data (global just to this module):
    Count = 0
    MaxCount = 10
    
    def Initialize():
        Error = (False, 0, "")  # No Error (change to indicate an error)
        return (Error,)
    
    def GetCount():
        return Count
    
    def Increment():
        global Count
        Error = (False, 0, "")  # No Error (change to indicate an error)
        if Count >= MaxCount:
            Error = (True, 1, "Increment<Err>Can't Increment as at MaxCount")
        else:
            Count += 1
            time.sleep(0.3)  # Wait, to count slowly enough to see
        return (Error, Count)
    
    def SetMaxCount(NewMaxCount):
        global MaxCount
        Error = (False, 0, "")  # No Error (change to indicate an error)
        MaxCount = NewMaxCount
        return (Error,)

     

  2. On 3/6/2021 at 6:25 AM, Neil Pate said:

    So again I am definitely not an expert but my understanding is that a virtual environment is like a small sandboxed instance where you can install packages and generally work without affecting all other python code on the system. I guess it's like nuget.

    If you don't use a virtual environment when you install packages it affects the global python installation on your machine.

    Seems like pretty sensible stuff (and hopefully what Project Dragon will do for LabVIEW). However I have not been able to get the native python node to work with a virtual environment.

    Though I see the value in that, I don't think that is a significant advantage in my case.  Actually a disadvantage, as my client is already overburdened with "too many things" complexity and would benefit from a standardized python environment.  I also want to minimize the complexity of install on a fresh computer, and I think the native node only requires python itself.

     

  3. For example, my Interface in JSONtext (if JSONtext were based in 2020 rather than 2017) would just implement "To JSON" and "From JSON" methods, whose default implementations would just use the standard flattening of the class into a JSON string.   I have been holding off implementing this as a parent class because I was waiting for interfaces.   Should I just go ahead and make this a Class?  Note that a User could not use your Lineator to actually do the conversion to JSON, as they cannot inherit off your Lineator if they are already inherited off my Class.  If Interfaces were used, they could use your Lineator to produce JSON inside my JSONtext subVIs.
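    The pattern described (a parent that supplies default "To JSON"/"From JSON" by flattening the class data, overridable by children) is LabVIEW-specific, but a rough Python analogy, with assumed names, might look like:

    ```python
    import json

    # Hypothetical analogy of the proposed parent class: default serialization
    # flattens all instance data; subclasses may override either method.
    class JSONSerializable:
        def to_json(self):
            # Default "To JSON": standard flattening of the object's data.
            return json.dumps(self.__dict__)

        def from_json(self, text):
            # Default "From JSON": restore the flattened data in place.
            self.__dict__.update(json.loads(text))
            return self

    class Point(JSONSerializable):
        def __init__(self, x=0, y=0):
            self.x, self.y = x, y

    p = Point(3, 4)
    print(p.to_json())                    # {"x": 3, "y": 4}
    q = Point().from_json(p.to_json())    # round-trips via the defaults
    ```

    The interface version of this would let a class pick up serialization without committing its single inheritance slot, which is the trade-off discussed above.
    
    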

  4. On 3/4/2021 at 12:45 AM, Aristos Queue said:

    That would still be a bug because a child doesn't know whether its parent will remain without private data permanently. 

    Just top-level parent classes that inherit from LVObject, then?  If LVObject gets private data, even an inheritance-based serializer will fail.

    On 3/4/2021 at 12:45 AM, Aristos Queue said:

    That's why the inheritance only supplies the ability to serialize. It says nothing about the format/structure/etc. 

    I haven't looked at your "lineator" in a long time, but I suspect it embodies architectural choices for which other developers may have reasons to choose differently.  Thus, there is a need to be able to support more than one type of serializer in the same class hierarchy.  Thus interfaces, maybe?

  5. I am just starting on trying to be able to use Python code from a LabVIEW application (mostly for some image analysis stuff).  This is for a large project where some programmers are more comfortable developing in Python than LabVIEW.  I have not done any Python before, and there seems to be a bewildering array of options: many IDEs, Libraries, and Python-LabVIEW connectors.

    So I was wondering if people who have been using Python with LabVIEW can give their experiences and describe what set of technologies they use.

  6. On 3/1/2021 at 5:29 PM, Aristos Queue said:

    Using interfaces would be a BUG. You cannot add serialization to a class if all ancestors do not support it or you end up with an insane class.

    You can add them to classes whose ancestors have no data (such as LVObject).   You might have a "Flatten to JSON" interface and a "Flatten to XML" interface and a "Store in My Special Format" interface, and can decide which formats to implement.  Inheritance only works once.

  7. I would guess it is a compiler optimization, where the terminal points to the same memory location as the output tunnel.  It is arguable that some breakpoint weirdness is better than the forced memory copies just in case a breakpoint might be added at some point (given the huge number of places a breakpoint could be added).  

    Alternately, it could be something to do with "chunking"; dividing a VI into executable chunks.  I wouldn't be surprised if a breakpoint can only pause between chunks.  Exiting the loop and writing to the indicator terminal could be one chunk.

  8. Saving your images directly in a binary file would probably be the fastest way to save.  Not the best format for long-term storage, but if you are only keeping them temporarily so they can be read and compressed later then that doesn't matter.
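    A minimal stdlib-only sketch of that "dump raw frames now, compress later" idea: append each frame's raw bytes to one binary file, recording offsets so frames can be read back individually for the later compression pass (frame size and count here are made up for illustration):

    ```python
    import os
    import tempfile

    frame = bytes(640 * 480)  # one dummy 8-bit 640x480 frame (all zeros)
    path = os.path.join(tempfile.mkdtemp(), "frames.bin")

    # Acquisition loop: append raw frames, remembering where each one starts.
    offsets = []
    with open(path, "ab") as f:
        for _ in range(3):  # pretend these arrive from the camera
            offsets.append(f.tell())
            f.write(frame)

    # Later pass: seek to any frame and read it back for compression.
    with open(path, "rb") as f:
        f.seek(offsets[1])
        recovered = f.read(len(frame))
    assert recovered == frame
    ```
    
    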

    I would first consider your Compression step; can you make that any faster?  You only need it 33% faster to get 150 FPS.  Is this your own developed compression algorithm?  How much CPU does it use when it is running?  If only one CPU then there are parallelization options (such as compressing multiple images in parallel).
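    As a sketch of the parallel-compression option, with zlib standing in for whatever the real compression step is (a thread pool works here because zlib releases the GIL on large buffers; a CPU-bound pure-Python compressor would need a process pool instead):

    ```python
    import zlib
    from concurrent.futures import ThreadPoolExecutor

    def compress(frame: bytes) -> bytes:
        # Fast compression level; the real algorithm would go here.
        return zlib.compress(frame, level=1)

    # Dummy 8-bit 640x480 frames standing in for camera images.
    frames = [bytes([i]) * (640 * 480) for i in range(8)]

    # Compress several frames at once; map preserves the input order.
    with ThreadPoolExecutor(max_workers=4) as pool:
        compressed = list(pool.map(compress, frames))
    ```
    
    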

    BTW: do you really have 8-byte pixels?

  9. Best to state performance numbers per operation.  I assume these are for 5528 images, so the "Drop Table" is 50ms each time?   Try eliminating the Drop Table and instead just delete the row.  If that works then your remaining dominant step is the 10ms/image for Compression.

    I think your initial mistake was to go, "Since we want to speed up the process, <we do extra steps and make things extra complicated and in parallel modules in hopes that will be faster>."  Better to have said, "Since we want to speed up the process, we will make a simple-as-possible prototype VI that we can benchmark."  That prototype would be a simple loop that gets data, compresses, and saves to db.
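    That kind of prototype is only a few lines. A minimal sketch using sqlite3 and zlib as stand-ins, timing per image and deleting old rows rather than dropping the table (frame size, row-retention window, and iteration count are made up for illustration):

    ```python
    import sqlite3
    import time
    import zlib

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, data BLOB)")

    frame = bytes(640 * 480)  # dummy image data
    n = 50
    start = time.perf_counter()
    for i in range(n):
        blob = zlib.compress(frame, level=1)                       # "compress" step
        db.execute("INSERT INTO images (data) VALUES (?)", (blob,))
        db.execute("DELETE FROM images WHERE id = ?", (i - 10,))   # prune old rows instead of DROP TABLE
    db.commit()
    per_image_ms = (time.perf_counter() - start) / n * 1000
    print(f"{per_image_ms:.2f} ms per image")
    ```

    Benchmarking a loop like this first tells you whether the database is ever the bottleneck, before any parallel-module architecture is built on top.
    
    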

  10. 2 hours ago, Francois Normandin said:

    Whenever it's been used in a project that I knew would persist more than for a simple demo, I've always wrapped the IMAQ API into an image class that handles the name generation with GUIDs to avoid collisions. Sometimes I'll add a singleton registry to keep track of references, but my IMAQ-flavored apps have generally been of low complexity, so I typically maintain a list of objects in the process' private data. When I need to leak the images across multiple processes, it is generally a delegation pattern, and the caller is still responsible for the reference's lifetime.

    I would much rather that IMAQ references behaved the same as other LabVIEW references, like Queues.

  11. As a side question, how do people deal with the non-standard way that IMAQ image references work (always globally named; don't clean up when the owning VI goes idle)?

    For background, I am currently trying to get a large amount of non-reentrant image analysis code to work reentrantly, and have to deal with preventing one clone of a VI modifying an image inadvertently shared with another.  I am attacking the problem by auto-generating image names based on call site (i.e., a pre-allocated clone that uses its own clone id in the image name):

    (attached image: Context Help screenshot)
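    The same idea (every clone/call site owning a name that cannot collide with any other) can be sketched in plain Python, with GUIDs standing in for the clone ids; `ImageNamer` is a hypothetical helper, not IMAQ API:

    ```python
    import uuid

    class ImageNamer:
        """Hypothetical per-clone namer: a base name plus a GUID fixed at creation."""
        def __init__(self, base: str):
            self.base = base
            self.guid = uuid.uuid4().hex  # unique per "clone"

        def name(self, suffix: str = "") -> str:
            # Stable within one clone, distinct across clones.
            return f"{self.base}-{self.guid}" + (f"-{suffix}" if suffix else "")

    a, b = ImageNamer("Filtered"), ImageNamer("Filtered")
    assert a.name() != b.name()   # two "clones" can never collide
    assert a.name() == a.name()   # but each clone's name is stable
    ```
    
    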
