Posts posted by Neil Pate

  1. 1 hour ago, Billy_G said:

    As I have said in my original post, I tried placing the DLL in most of the locations you enumerated, including a directory tree perfectly matched to the project location, with no luck. All the vendor gave me were one LLB and one DLL in a zip file. I just unpacked and added them to the LabVIEW project. There was no driver installation, no subsequent copying of some DLLs but not others. And the fact that the error messages are about missing functions in that DLL makes me think that it is not related to other DLL dependencies.

    I emailed a zipped up executable to a colleague with a LabVIEW IDE, and I have no idea where he unzipped the files or what his version of LabVIEW was, but he said it loaded without an error message on the first try.

    Have you tried any kind of dependency checker? This can be useful in tracking down why a DLL is not getting loaded with LabVIEW.

    https://github.com/lucasg/Dependencies
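A rough sketch of what a dependency checker does, as a Python stand-in (the helper name and DLL names are made up for illustration): walk the directories Windows would search and report where a given DLL can actually be found. The real loader also checks the application directory, System32, and any `SetDllDirectory` entries, so this is only a first-pass diagnostic.

```python
# Hypothetical helper: emulate the basic "search the PATH" part of the
# Windows DLL search order to see where a DLL would (or would not) be found.
import os

def find_dll(name, extra_dirs=()):
    """Return every searched directory that contains `name`."""
    search = list(extra_dirs) + os.environ.get("PATH", "").split(os.pathsep)
    hits = []
    for d in search:
        if d and os.path.isfile(os.path.join(d, name)):
            hits.append(d)
    return hits

# A DLL that exists in none of the searched directories is simply not
# loadable; an empty result here is the "file not found" half of the story.
# (A DLL that is found but still errors usually has missing *dependencies*,
# which is exactly what the Dependencies tool above visualises.)
print(find_dll("vendor_instrument.dll"))  # probably [] — not on the PATH
```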

    Now, I did come across one super weird issue last year which totally surprised me. I don't think it is the same thing you are experiencing, but see this thread for an explanation: https://forums.ni.com/t5/LabVIEW/error-loading-lvanlys-dll-in-Labview-64-bits/td-p/4009772

     

  2. I think you need to start with a simpler example. (And sorry, I mistakenly thought you were using RT; you are using an FPGA card in a PC, right?)

    Try to make the simplest scenario you can think of: a simple VI generating a single point of the triangle wave at a time. Transfer this value to the FPGA, but wire it to all the analogue outputs at the same time. If you still have a phase shift then something really weird is going on.

    It has been a while since I used a PC-based FPGA card; is it possible the FPGA analogue outputs are somehow configured differently in the .lvproj? Perhaps different filters or something?
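The "one point at a time" test above can be sketched like this (a Python stand-in for the host-side VI loop; the function names and rates are illustrative, not from the original post): a phase accumulator that emits the next triangle-wave sample on each call.

```python
# Sketch of single-point triangle-wave generation, one sample per loop
# iteration, as a host VI would before writing the value to the FPGA.
def triangle_point(phase):
    """Map a phase in [0, 1) to a triangle wave in [-1, 1]."""
    return 4 * abs(phase - 0.5) - 1

def generate(n_points, freq_hz, sample_rate_hz):
    """Emit n_points samples by advancing the phase accumulator each call."""
    phase = 0.0
    out = []
    for _ in range(n_points):
        out.append(triangle_point(phase))
        phase = (phase + freq_hz / sample_rate_hz) % 1.0
    return out

samples = generate(100, freq_hz=10, sample_rate_hz=1000)
```

If the same `samples[i]` value is wired to every analogue output on each iteration, any remaining channel-to-channel phase shift cannot be coming from the signal generation.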

  3. I suspect the problem might be that you are essentially trying to do single-point output from the RT side of things. The property node on the RT might look like it is doing everything at once, but I don't think it actually updates all the values at the same time.

    Normally you would generate a waveform by either doing all the maths on the FPGA itself or using a DMA FIFO or something similar.

    If you are determined to do the signal generation on the RT, then try replacing the 8 controls you are using to send the points to the FPGA with a single cluster of 8 elements. This will guarantee "atomic" transmission and might fix your phase shift.

  4. 5 hours ago, ShaunR said:

    You have obviously never done Agile Development proper then since it is an iterative process which starts with the design step just after requirements acquisition.

    It's not a fear of failure, it is a fast-track route to failure which usually ends up with the software growing like a furry mold.

    But anyway. It's your baby. You know best. Good luck :)

    Better check your sense-of-humour detector; I think it might be faulty.

    • Like 1
  5. 11 hours ago, ShaunR said:

    Uhuh. Seat-of-your pants design; the fastest way to project failure.

    I believe this is now referred to as "Agile" 😉 

    But in all seriousness, not attempting something for fear of failure is not something I have ever really worried about. Also, it is pretty much impossible to fail at a 100% hobby/pet-project/learning experience.

    • Haha 2
  6. On 11/20/2020 at 5:30 PM, Matt_AM said:

     

    @ThomasGutzler What do you mean by "Returning different data types from classes of the same instrument type is something you don't want."? I'm assuming you mean something like: use the parent "Power Supply" object for my connector panes and define the child (such as TDK Lambda) during the initialization section of my test. This way, if I wanted to change the PS from TDK Lambda to, say, Sorenson, all I'd have to do is change the test's initialization section, since all my connector panes use the Power Supply parent class. If this is the case, I am doing that already; I may just be bad with my vocabulary.

    I think what Thomas meant is that if you are going to use Dynamic Dispatch (DD), then you are forced to have the same data types, as the connector panes of the concrete DD VIs all have to be identical. If you already have something working, it's probably OK. As an example, you cannot have Instrument 1 return an array of DBL and Instrument 2 return an array of SGL from a "Read.vi".
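The same constraint exists in text languages. A rough Python analogy of LabVIEW dynamic dispatch (class and method names are made up for illustration): every override must keep the base method's signature, so two instruments cannot return different array types from the same "Read.vi".

```python
# Python stand-in for dynamic dispatch: the abstract base fixes the
# "connector pane", and every concrete instrument must match it.
from abc import ABC, abstractmethod

class Instrument(ABC):
    @abstractmethod
    def read(self) -> list[float]:
        """All concrete instruments must return the same data type."""

class Instrument1(Instrument):
    def read(self) -> list[float]:
        return [1.0, 2.0, 3.0]    # the "array of DBL"

class Instrument2(Instrument):
    def read(self) -> list[float]:
        return [0.5, 0.25]        # must still be list[float], not another type

def acquire(instr: Instrument) -> list[float]:
    return instr.read()           # dispatch picks the concrete override
```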

  7. 19 hours ago, UncleFungus46 said:

    Hi Neil.  

    Long time no see.  I am trying to do something similar to what you have done above - I have my Azure hub set up and can talk to it via my Beaglebone ok (trying to open the door of my Chicken Coop).  What I'm not getting is where the Primary Key/Connection String gets inserted using the MQTT (Cowen71).  I seem to connect to the hub ok, just not the device.  Can you offer any assistance?

    Thanks, Rob (FIF1)

    [Attachment: LV MQTT Settings.png]

    Hey Rob (UncleFungus🤣🤣)

    I actually moved away from that library in the end, as I have my own actor style and wanted something more low-level. I cannot find my old code that was working with this library any more. I now use the MQTT library from daq.io, as it gives me the low-level access I need.

    Looking at your screenshot, though, I think I remember. At the bottom you have the cluster with User Name. I am pretty sure the private key string (as copied directly out of Azure) goes there, or in one of the elements of that cluster.

    Now, in my production system I have moved away from this technique and am generating a SAS token each time I connect. This might be the wrong thing to do; I honestly have no idea, but it seems to work! 
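For reference, generating an Azure IoT-style SAS token at connect time looks roughly like this in Python (the hub URI, device name, key, and expiry below are made-up placeholders; the token format is `SharedAccessSignature sr=...&sig=...&se=...`).

```python
# Sketch of SAS token generation: HMAC-SHA256 over the URL-encoded resource
# URI plus expiry, signed with the base64-decoded device key.
import base64, hashlib, hmac, urllib.parse

def generate_sas_token(resource_uri, b64_key, expiry_epoch):
    sr = urllib.parse.quote(resource_uri, safe="")
    to_sign = f"{sr}\n{expiry_epoch}".encode()
    key = base64.b64decode(b64_key)
    sig = base64.b64encode(hmac.new(key, to_sign, hashlib.sha256).digest())
    sig = urllib.parse.quote(sig, safe="")
    return f"SharedAccessSignature sr={sr}&sig={sig}&se={expiry_epoch}"

token = generate_sas_token(
    "myhub.azure-devices.net/devices/coop-door",   # hypothetical device URI
    base64.b64encode(b"not-a-real-key").decode(),  # placeholder primary key
    1700000000,                                    # fixed expiry for the example
)
```

The resulting string goes into the MQTT password field; the user name stays as the hub/device identity.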

  8. I have some information from one of my customers, but it's a bit muddled and I am trying to understand it. As it has been described to me, a USB stick is used to "download" LabVIEW and use it as an "operating system". So obviously there is a bit of a mismatch of vocabulary, or of understanding of what LabVIEW is. The closest ideas I have are that this is some kind of Linux live USB, or perhaps they are running the LabVIEW application directly off the memory stick without installing the RTE.

    Does anyone know if it is possible to run a LabVIEW application without installing the RTE by carefully placing certain files in the right place?

  9. On 10/26/2020 at 9:26 PM, Darren said:

    I've been following this thread with interest, I love SC2. Looking forward to seeing what you come up with next.

    Ok cool, I will pick it up again. 🙂

    Stupidly, I am now thinking of moving the game logic to Python, as this will give me a chance to play with the Python integration node in LabVIEW (assuming it is not too slow) and also to polish up my Python. The intention is that I can modify the game logic while the game is running.

    • Like 1
  10. RAM is virtually free these days. As much as I love and absolutely strive for efficiency, there is just no point in sweating over a few MB of memory. There is no silver bullet; if I need to do multiple things with a piece of data, it is often much easier to just make a copy and forget about it after that (so multiple queues, multiple consumers of a User Event, whatever).

    It is not uncommon for a PC to have 32 GB of RAM, and even assuming we are using only 32-bit Windows, that still means nearly 3 GB of RAM available for your application, which is actually an insane amount.

    • Like 1
  11. 10 hours ago, LogMAN said:

    I agree. To be clear, it is not my intention to argue against events for sending data between loops. I'm sorry if it comes across that way.

    My point is that the graphical user interface probably doesn't need lossless data, because that would throttle the entire system and I don't know of a simple way to access a subset of data using events, when the producer didn't specifically account for that.

    No need to apologise, it did not come across like that at all.

    There is no rule that says you have to update your entire GUI every time a big chunk of data comes in. It's perfectly OK to have the GUI consumer react to the "data in" type of event and then just ignore it if it's not sensible to process. Assuming your GUI draw routines are pretty fast, it's just about finding the sweet spot: updating the GUI at a sensible rate while still being able to get back to processing (or maybe ignoring!) the next incoming chunk.

    That said, I normally just update the whole GUI! I aim for about a 10 Hz update rate, so things like DAQ or DMA FIFO reads chug along at 10 Hz, and this effectively forms a metronome for everything. I have done some work on a VST with a data rate of around 100 MS/s across multiple channels, and I was able to plot pretty much all of it in close to real time. Totally unnecessary, yes, but possible.
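The "react to every event, but only redraw at a sensible rate" idea can be sketched as follows (a Python stand-in; the class and the integer-millisecond clock are illustrative inventions so the behaviour is deterministic — in a real GUI loop you would use the wall clock).

```python
# Lossy GUI consumer: every event updates `latest`, but an actual redraw
# happens at most once per `period_ms`. Stale chunks are simply superseded.
class ThrottledDisplay:
    def __init__(self, period_ms=100):            # 100 ms -> ~10 Hz redraws
        self.period_ms = period_ms
        self.last_draw_ms = None
        self.latest = None
        self.draw_count = 0

    def on_data(self, chunk, now_ms):
        self.latest = chunk                       # always keep the newest chunk
        if self.last_draw_ms is None or now_ms - self.last_draw_ms >= self.period_ms:
            self.draw_count += 1                  # "redraw" using self.latest
            self.last_draw_ms = now_ms
        # otherwise: ignore this event; a newer chunk will arrive anyway

d = ThrottledDisplay(period_ms=100)
for i in range(1000):                             # one chunk per ms for 1 s
    d.on_data(chunk=i, now_ms=i)
print(d.draw_count)                               # → 10
```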

  12. 13 hours ago, LogMAN said:

    Okay, so this is the event-driven producer/consumer design pattern. Perhaps I misunderstood this part:

    If one consumer runs slower than the producer, the event queue for that particular consumer will eventually fill up all memory. So if the producer had another event for these slow-running consumers, it would need to know about those consumers. At least that was my train of thought 🤷‍♂️😄

    My consumers always (by design) run faster than the producer. At some point any architecture is going to fall over, even with the biggest buffer in the world, if data is building up anywhere. User Events, queues, whatever: if you need lossless data, it is being "built up" somewhere.

  13. 4 hours ago, LogMAN said:

    Doesn't that require the producer to know about its consumers?

    No, not at all. My producers just publish data onto their own (self-created and managed) User Event. Consumers can choose to register for this event if they care about the information being generated. The producer has absolutely no idea who or even how many are consuming the data.
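A minimal publish/subscribe sketch of the pattern just described (Python stand-in; the class and event names are illustrative): the producer owns its event and fires into it, consumers register themselves, and the producer never knows who, or how many, they are.

```python
# Producer-owned event: consumers opt in by registering; the producer just
# publishes and has no knowledge of its subscribers.
class UserEvent:
    def __init__(self):
        self._subscribers = []

    def register(self, callback):
        self._subscribers.append(callback)

    def fire(self, data):
        for cb in self._subscribers:   # no idea who or how many these are
            cb(data)

acq_event = UserEvent()                # created and managed by the producer

received = []
acq_event.register(received.append)    # a consumer that cares about this data
acq_event.register(lambda d: None)     # another consumer; producer unaware

acq_event.fire([1.0, 2.0, 3.0])        # publish one chunk of data
print(received)                        # → [[1.0, 2.0, 3.0]]
```

Having several such events per producer is what lets each process listen only to the data it cares about.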

  14. 1 hour ago, LogMAN said:

    That is correct. Since the UI loop can run at a different speed, there is no need to send it all data. It can simply look up the current value from the data queue at its own pace without any impact on one of the other loops.

    How is a DVR useful in this scenario?

    Unless there are additional wire branches, there is only one copy of the data in memory at all times (except for the data shown to the user). A DVR might actually result in less optimized code.

    Events are not the right tool for continuous data streaming.

    • It is much more difficult to have one loop run at a different speed than the other, because the producer decides when an event is triggered.
    • Each Event Structure receives its own data copy for every event.
    • Each Event Structure must process every event (unless you want to fiddle with the event queue 😱).
    • If events are processed slower than the producer triggers them, the event queue will eventually use up all memory, which means that the producer must run slower than the slowest consumer, which is a no-go. You probably want your producer to run as fast as possible.

    Events are much better suited for command-like operations with unpredictable occurrence (a user clicking a button, errors, etc.).

    I exclusively use events for messages and data, even for super-high-rate data. The trick is to have multiple events so that processes can listen to just the ones they care about.

    • Like 1
  15. 1 hour ago, brownx said:

    If it's that complex I would rather do it in C++, a few days' work (I have already done such stuff, so I am not starting from zero), and just export the thing to LabVIEW :)) But the guys following me will go crazy in case any change is needed :D

    The HAL is not that complex in my case: just think of a set of booleans and numeric values; the rest is taken care of by the HW, so there is no need to implement a huge abstraction layer in LabVIEW ...

    Classes, multiple inheritance, class factories, etc. are not a problem; I am used to them. What I am not that familiar with are notifiers, but with LogMAN's and the NI examples it will be fine.

    I've done complex HAL systems in C++: a home automation/IoT server with modules loadable by DLL or JAR or ActiveX or whatever, everything abstract including the programming language too, highly scalable, etc. etc. Is this what you mean? 

    If yes, I probably don't need something that complex: there will be fewer than 10 types of HW supported, and the consumer of the data will know exactly what hardware he is using, so there is no need for that complexity. 

    If you can get all of this done in C++ in a few days, I am mighty impressed 🙂

  16. 10 minutes ago, brownx said:

    For me a subvi is equivalent to a function call in a scripted language; this is how TestStand uses them too (you just see a line of script representing a VI with a bunch of inputs and outputs representing the data in and out, and never even see the UI). 
    But I get what you mean ... 

    I don't care too much about performance, since my application is a slow one with a small amount of data; what is complex is the distribution of the data, the multithreaded access and the need for scalability.
    This is why I thought there should be a better way than "arrays of arrays of arrays or classes and references", which is a pain to maintain and to scale.

    Ok, let's talk about my example. It's basically an IO driver:
    - I need to support a set of hardware, each with N inputs and M outputs (it doesn't matter what kind; there will be many)
    - it has to be easy to scale, to add more types and more of the same type
    - it has to be multithreaded and permit multiple instances of the same HW
    - data has to be accessed from a single standalone VI call (hard to explain this unless you guys are using TestStand, which is a bit different from having one monolithic LabVIEW app)

    How would you guys implement it? I don't need complete examples, just a few keywords from which I can get a new viewpoint ...

    For now what I see is a set of classes, held only in an array (maybe not even that is needed), using one persistent VI per instance, with a notifier to get the data from it and a queue to send commands to it (this will replace the arrays of arrays of arrays I did not like anyway). Do you see anything wrong with it, or do you have a completely new idea? 

     

    Well, now you are really getting into the need for a proper architecture, with a HAL and clonable/re-entrant actors and things like that. Not something that can easily be described in a few sentences.

