Posts posted by Neil Pate

  1. On 10/26/2020 at 9:26 PM, Darren said:

    I've been following this thread with interest, I love SC2. Looking forward to seeing what you come up with next.

    Ok cool, I will pick it up again. 🙂

    Stupidly, I am now thinking of moving the game logic to Python, as this will give me a chance to play with the Python integration node in LabVIEW (assuming it is not too slow) and also polish up my Python. The intention is that I can modify it while the game is running.
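    To make the "modify it while the game is running" part concrete, here is a minimal Python-side sketch of hot reloading. The module layout and function names are hypothetical, not from the actual game; the idea is that LabVIEW's Python Node would call through something like this each frame, picking up on-disk edits as they happen.

```python
import importlib
import types

def make_hot_reloader(module: types.ModuleType):
    """Return a callable that reloads 'module' from disk before delegating,
    so edits made while the 'game' is running take effect immediately."""
    def call(func_name, *args):
        fresh = importlib.reload(module)   # re-executes the module source
        return getattr(fresh, func_name)(*args)
    return call
```

    Reloading every frame is obviously the slow-but-simple version; a real implementation might only reload when the file's modification time changes.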

    • Like 1
  2. RAM is virtually free these days. As much as I love and absolutely strive for efficiency, there is just no point in sweating over a few MB of memory. There is no silver bullet; if I need to do multiple things with a piece of data it is often so much easier to just make a copy and forget about it after that (so multiple queues, multiple consumers of a User Event, whatever).

    It is not uncommon for a PC to have 32 GB of RAM, and even assuming we are using only 32-bit Windows, that still means nearly 3 GB of RAM available for your application, which is actually an insane amount.
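    A rough sketch of the copy-and-forget approach in Python (consumer names are illustrative): each consumer gets its own queue and its own copy of every chunk, so nobody has to coordinate with anybody else.

```python
from queue import Queue

# One independent queue per consumer: every consumer gets its own copy of
# each chunk, trading a few MB of RAM for complete decoupling.
consumers = {"logger": Queue(), "gui": Queue(), "analysis": Queue()}

def publish(chunk):
    for q in consumers.values():
        q.put(list(chunk))   # explicit copy, so consumers can't interfere

publish([1, 2, 3])
```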

    • Like 1
  3. 10 hours ago, LogMAN said:

    I agree. To be clear, it is not my intention to argue against events for sending data between loops. I'm sorry if it comes across that way.

    My point is that the graphical user interface probably doesn't need lossless data, because that would throttle the entire system and I don't know of a simple way to access a subset of data using events, when the producer didn't specifically account for that.

    No need to apologise, it did not come across like that at all.

    There is no rule that says you have to update your entire GUI every time a big chunk of data comes in. It's perfectly OK to have the GUI consumer react to the "data in" type event and then just ignore it if it's not sensible to process. Assuming your GUI draw routines are pretty fast, it's just about finding the sweet spot of updating the GUI at a sensible rate while still being able to get back to processing (or maybe ignoring!) the next incoming chunk.

    That said, I normally just update the whole GUI! I try to aim for about a 10 Hz update rate, so things like DAQ or DMA FIFO reads chug along at 10 Hz, and this effectively forms a metronome for everything. I have done some work on a VST with a data rate of around 100 MS/s across multiple channels, and I was able to plot that pretty much in real time. Totally unnecessary, yes, but possible.
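    The "react, then ignore if not sensible" idea can be sketched like this in Python (the ~10 Hz interval matches the post; the redraw itself is stubbed out, and the class name is invented):

```python
import time

class ThrottledDisplay:
    """Keep reacting to every "data in" event, but only redraw at most
    every `interval` seconds; chunks arriving in between are ignored."""
    def __init__(self, interval=0.1):              # ~10 Hz, as in the post
        self.interval = interval
        self._last_draw = float("-inf")
        self.drawn = 0

    def on_data(self, chunk, now=None):
        now = time.monotonic() if now is None else now
        if now - self._last_draw >= self.interval:
            self._last_draw = now
            self.drawn += 1                        # stand-in for a redraw
        # else: drop this chunk; a fresher one is on the way
```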

  4. 13 hours ago, LogMAN said:

    Okay, so this is the event-driven producer/consumer design pattern. Perhaps I misunderstood this part:

    If one consumer runs slower than the producer, the event queue for that particular consumer will eventually fill up all memory. So if the producer had another event for these slow-running consumers, it would need to know about those consumers. At least that was my train of thought 🤷‍♂️😄

    My consumers always (by design) run faster than the producer. At some point any architecture is going to fall over, even with the biggest buffer in the world, if data is building up anywhere. User Events or queues or whatever: if you need lossless data, it is being "built up" somewhere.

  5. 4 hours ago, LogMAN said:

    Doesn't that require the producer to know about its consumers?

    No, not at all. My producers just publish data onto their own (self-created and managed) User Event. Consumers can choose to register for this event if they care about the information being generated. The producer has absolutely no idea who or even how many are consuming the data.
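    In text form, a User Event used this way is just a publish/subscribe object owned by the producer; a minimal, hypothetical Python sketch (not LabVIEW API):

```python
class UserEvent:
    """Minimal stand-in for a LabVIEW User Event: the producer owns it,
    consumers register, and the producer never knows who is listening."""
    def __init__(self):
        self._subscribers = []

    def register(self, callback):
        self._subscribers.append(callback)

    def fire(self, data):
        for cb in self._subscribers:
            cb(data)

# Producer side: create and fire, with no knowledge of consumers.
temperature_event = UserEvent()

# Consumer side: opt in independently.
readings = []
temperature_event.register(readings.append)
temperature_event.fire(21.5)
```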

  6. 1 hour ago, LogMAN said:

    That is correct. Since the UI loop can run at a different speed, there is no need to send it all data. It can simply look up the current value from the data queue at its own pace without any impact on one of the other loops.

    How is a DVR useful in this scenario?

    Unless there are additional wire branches, there is only one copy of the data in memory at all times (except for the data shown to the user). A DVR might actually result in less optimized code.

    Events are not the right tool for continuous data streaming.

    • It is much more difficult to have one loop run at a different speed than the other, because the producer decides when an event is triggered.
    • Each Event Structure receives its own data copy for every event.
    • Each Event Structure must process every event (unless you want to fiddle with the event queue 😱).
    • If events are processed slower than the producer triggers them, the event queue will eventually use up all memory, which means that the producer must run slower than the slowest consumer, which is a no-go. You probably want your producer to run as fast as possible.

    Events are much better suited for command-like operations with unpredictable occurrence (a user clicking a button, errors, etc.).

    I exclusively use events for messages and data, even for super-high-rate data. The trick is to have multiple events so that processes can listen to just the ones they care about.
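    One way to picture the multiple-events trick (again a hypothetical Python sketch, not LabVIEW API): one event per topic, and each consumer registers only for the topics it cares about, so high-rate data never reaches uninterested loops.

```python
from collections import defaultdict

class EventBus:
    """One event per topic: consumers subscribe only to the topics they
    care about, so high-rate data never reaches uninterested loops."""
    def __init__(self):
        self._topics = defaultdict(list)

    def register(self, topic, callback):
        self._topics[topic].append(callback)

    def fire(self, topic, data):
        for cb in self._topics[topic]:
            cb(data)

bus = EventBus()
status_log, samples = [], []
bus.register("status", status_log.append)
bus.register("raw-data", samples.append)

bus.fire("raw-data", [0.1, 0.2])   # only the data consumer sees this
bus.fire("status", "running")
```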

    • Like 1
  7. 1 hour ago, brownx said:

    If it's that complex I'd rather do it in C++; a few days (I've already done such stuff, so I'm not starting from 0) and just export the thing to LabVIEW :)) But the guys following me will go crazy in case any change is needed :D

    The HAL is not that complex in my case: just think of a set of booleans and numeric values. The rest is taken care of by the HW, so there is no need to implement a huge abstraction layer in LabVIEW...

    Classes, multiple inheritance, class factories, etc. are not a problem; I am used to them. What I am not that familiar with are notifiers, but with LogMAN's and NI's examples it will be fine.

    I've done complex HAL systems in C++: a home automation/IoT server with modules loadable by DLL or JAR or ActiveX or whatever, everything abstract including the programming language, highly scalable, etc. Is this what you mean?

    If yes, I probably don't need something that complex: there will be fewer than 10 types of HW supported, and the consumer of the data will know exactly what hardware he is using, so there is no need for that complexity.

    If you can get this all done in C++ in a few days I am mighty impressed 🙂

  8. 10 minutes ago, brownx said:

    For me a subVI is equivalent to a function call in a scripted language; this is how TestStand uses them too (you just see a line of script representing a VI with a bunch of inputs and outputs representing the data in and out; you never even see the UI).
    But I get what you mean...

    I don't care too much about performance, since my application is a slow one with a small amount of data; what is complex is the distribution of the data, the multithreaded access, and the need for scalability.
    This is why I thought there should be a better way than "arrays of arrays of arrays, or classes and references", which is a pain to maintain and to scale.

    Ok, let's talk about my example; it's basically an IO driver:
    - I need to support a set of hardware, each with N inputs and M outputs (it doesn't matter what kind; there will be many)
    - it has to be easily scaled to add more types and more of the same type
    - it has to be multithreaded and permit multiple instances of the same HW
    - data has to be accessed from a single standalone VI call (hard to explain unless you guys are using TestStand, which is a bit different from having one monolithic LabVIEW app)

    How would you guys implement it? No need for complete examples, just a few keywords from which I can get a new viewpoint...

    For now what I see is a set of classes held in an array (maybe not even that is needed), using one persistent VI per instance, with a notifier to get data from it and a queue to send commands to it (this will replace the arrays of arrays of arrays I did not like anyway). Do you see anything wrong with this, or do you have a completely new idea?

     

    Well, now you are really getting into the need for a proper architecture, with a HAL and clonable/re-entrant actors and stuff. Not something that can easily be described in a few sentences.
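    For flavour, here is a bare-bones Python analogue of the "one persistent VI per instance, queue in, notifier out" idea floated above. All names are invented and the hardware access is stubbed; it is a sketch of the shape of the pattern, not a design.

```python
import threading
from queue import Queue

class DeviceActor:
    """Sketch of one 'persistent VI per instance': a worker thread takes
    commands from a queue and publishes its latest reading notifier-style,
    meaning readers only ever see the newest value."""
    def __init__(self, name):
        self.name = name
        self.commands = Queue()
        self._latest = None
        self._lock = threading.Lock()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            cmd = self.commands.get()
            if cmd == "stop":
                break
            if cmd == "read":                 # pretend hardware access
                with self._lock:
                    self._latest = f"{self.name}: ok"

    def latest(self):
        """Notifier-style read: latest value only, no backlog."""
        with self._lock:
            return self._latest

    def stop(self):
        self.commands.put("stop")
        self._thread.join()
```

    Multiple instances of the same hardware type are then just multiple `DeviceActor` objects, which maps loosely onto clonable/re-entrant VIs.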

  9. Just now, brownx said:

    I C ...

    Well, in my case the UI is almost nonexistent: I work underneath layers of TestStand scripts targeting different hardware, and the result is never a UI (unless you count a PASSED/FAILED and the test log as a UI, with some minimal interactions like start/stop/exit, etc.).

    So I don't think in terms of UI; the data I am working with is almost never shown (except maybe on the debug UI, where I want to see some details if something goes wrong :) )

    You don't really need to worry about GUI performance anyway until you are getting into real-time updating of graphs with hundreds of MB of data. Even then it can be done if you are careful.

  10. 25 minutes ago, brownx said:

    Hm, I definitely made this mistake... Even my classes have their private data in clusters formed from booleans, etc.; the same sort of elements you get on a UI.

    Can you give me a good LabVIEW example which shows how to use real data vs UI elements?

    I did Core 1 and 2 and this was never mentioned (except functional global variables), and I still have the feeling that I am missing something; this could be the reason I still kind of don't like LabVIEW :)

    You do use the same "normal" controls in the class private data as you would in your GUI. This is 100% OK and you can use whatever you like, as they will never be visible at runtime; they are just a visual representation of your data types.

    What you choose to show on the GUI is totally unrelated to what data you choose to have in the classes.

     

    • Like 1
  11. 1 minute ago, ensegre said:

    This one is simply great as a GUI for that. If you're looking for a programmatic way of monitoring running clones, I guess you simply have to look under the hood.

    That is nice, but I found it way too heavyweight to use regularly. My debugger is built into my application and can be used even in the exe.

  12. A simple way I have found that works quite nicely is to have the clone register itself with some kind of repository (FGV). I then have another tool which reads the list of running VIs and can do stuff like display status, open block diagrams, etc.

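    The registration half of this pattern is tiny; a hypothetical Python sketch of the FGV-style repository (in LabVIEW this would be a functional global variable, and the clone IDs below are made up):

```python
# Stand-in for an FGV: a module-level registry that running clones add
# themselves to, so a separate debug tool can enumerate them later.
_registry = {}

def register_clone(clone_id, status="running"):
    _registry[clone_id] = status

def running_clones():
    """What the monitoring tool reads to display status, open BDs, etc."""
    return dict(_registry)

# Each clone calls this once when it starts up.
register_clone("Worker.vi:1")
register_clone("Worker.vi:2")
```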

  13. 5 hours ago, drjdpowell said:

    If only there was a

    
    > git fast-forward <new branch>

    One step, and no scary "-force" or "delete" involved.

    Have you tried GitKraken? I know everyone harps on about how the only way to use git is from the command line, but I don't actually think that is a good way to get to know something as complicated as git. Sure, move on to the command line later, but don't start there. Learning a new VCS should happen slowly and mostly painlessly; who has the time to get intimately acquainted with a new tech that does not actually help get the bill-paying project out the door?

  14. 8 hours ago, Mode_Locked said:

    Thanks for taking a stab at this, LogMAN!

    Unfortunately, that did not work. It looks like there is something special about 'WaitForNextEvent' that recognizes system-level WMI events that LabVIEW is not able to see. I'm no WMI expert, but I looked at this in terms of dataflow by highlighting execution on the block diagram. It appears that the data just flows through the 'Reg Event Callback' node without waiting for the event as soon as the program is run.

    I'm going to assume that there is a rather low probability of you having a Toshiba USB stick :)

    I modified the code to where it detects the launch of Notepad. This way, you could test it on your system if interested.

    Again, thank you for your help.

     

     

    WMI_NotepadLaunchEvent_withTimeOut.vi 13.21 kB · 1 download

    Reg Event Callback is not the thing that waits; it is (as the name implies) merely the registration, so it should return immediately. Did you create the callback VI?

  15. Totally a long shot, but could it be a path-length issue? I have not tried to do what you are trying to do, but I have in the past seen builds sporadically fail because the path of some VI exceeded the Windows limit of 260 characters (or something like that). As soon as I built into something like c:\temp the problem went away!

    This was years ago, I have to say, and in more recent versions of Windows it is possible to work around it with a small change to the registry.

  16. For what it is worth, the performance of regular DMA FIFOs is quite impressive. I recently worked with a VST that had multiple channels at a 120 MHz data rate, and I was able to read these from the FPGA, do some processing, and stream continuously to a RAID array at the full rate.
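    Back-of-envelope arithmetic for that data rate. Sample width and channel count are assumptions here (16-bit samples being typical for digitizer DMA transfers; the channel count is purely illustrative):

```python
sample_rate = 120e6        # samples/s per channel, from the post
bytes_per_sample = 2       # assuming 16-bit (I16) samples
channels = 4               # illustrative; the post just says "multiple"

per_channel = sample_rate * bytes_per_sample          # bytes/s
total = per_channel * channels
print(per_channel / 1e6, "MB/s per channel;", total / 1e9, "GB/s total")
# → 240.0 MB/s per channel; 0.96 GB/s total
```

    Sustaining around 1 GB/s to disk is exactly the regime where a RAID array (rather than a single drive) becomes necessary.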

  17. Yes, I probably should have used that. Instead it was not too tricky to just read it from the OS directly (at least the easy ones like memory and disk usage).

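    For comparison, reading the easy ones really is trivial in most environments; Python's standard library gives disk usage portably in one call (memory needs platform-specific calls, which are omitted here):

```python
import shutil

# Query the OS directly for disk usage of the root volume.
usage = shutil.disk_usage("/")
print(f"disk: {usage.used / usage.total:.0%} used "
      f"({usage.free / 1e9:.1f} GB free)")
```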

  18. 2 hours ago, drjdpowell said:

    The problem I had is that submodules are connected to a specific commit, rather than branch, so when one checks them out originally one must remember to manually create a branch, else if you commit changes they are a "detached head", which is then fixable, but a pain.  

    I find Git to be very much "'Oh, you should have used the "engage safety" and "don't_point_at_foot" options when you called the "git new_gun" command'.

     

    Hmm, that is not my experience with GitKraken. I could work without creating a branch and it would just tell me that I needed to commit the submodule changes.

    I did get the detached head a few times though.

  19. 2 hours ago, drjdpowell said:

    I've been experimenting with Git Submodules, as a way to deal with a large repo that supports multiple different Test Stations with lots of common code.  It again demonstrates Git is powerful, but also how horrible the UX design is.  It's like I want to do an obvious thing, so I do the obvious action, but because I missed the non-obvious step needed first, I am now in a state where I have to spend 10 minutes coming up with a series of multiple actions that will undo the damage.

    I have found the submodule integration in GitKraken pretty self-explanatory, actually. I have of course managed to screw things up a few times, but so far I have been able to recover from my mistakes.
