Everything posted by LogMAN

  1. Edit: Never mind, I misread your post 😄 In order for LabVIEW to know about all dependencies in a project, it has to traverse all of its members. Because this operation is slow, it probably keeps references open for as long as possible. I'm not sure why it would unload the assembly in the standalone case, but that is how it is.
  2. There is something strange about how this library is managed. For some reason it seems to work if the VI is part of a project, but not as a standalone VI (see the screenshots "Standalone" and "As part of a project"). I did not reproduce the entire example, so it might fail. At least you can try adding your VI to a project and see if it works (make sure the assembly is listed under Dependencies).
  3. I was wondering why it wasn't documented; now I know 😄
  4. 1+3 makes the most sense in my opinion. And it should be limited to a single SQL statement (or multiple statements if they use absolute parameter bindings like "?NNN"). Perhaps it also makes sense to apply that limitation to Execute SQL, because it can get very confusing when executing multiple statements in a single step with parameters that include arrays. For example, what is the expected outcome of this? I could be misled into thinking that the user name only applies to the first SQL statement and the array to the second, or perhaps that it uses the same user name multiple times while it iterates over each element of the array, but that is not how it works. Right now it produces empty rows 😟 With my fix above, it will produce exactly one row 🤨 By executing the statement over and over again until all parameters are used up (processed sequentially), it will insert data into the wrong places 😱

     In my opinion it should only accept:
     a) an array of any type, each element of which must satisfy all parameter bindings.
     b) a cluster of any type, which must satisfy all parameter bindings.
     c) a single element of any type.

     In case of a) or b), subarrays are not allowed (because a subarray would imply 3D data, which is tricky to insert into 2D tables). That way I am forced to write code like this, which is easier to comprehend in my opinion: This is the expected output: Here is what I did to Execute SQL (in addition to the fix mentioned before):
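A minimal sketch of rule a) in Python's built-in sqlite3 module (the table and values are invented; this is an analogy for the LabVIEW diagram, not the library's actual implementation): the statement runs once per array element, and each element must satisfy all of the statement's parameter bindings.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (user TEXT, value REAL)")

# Rule a): one statement, executed once per array element; each
# element (a tuple here, standing in for a cluster) must satisfy
# ALL parameter bindings of the statement.
rows = [("alice", 1.0), ("alice", 2.0), ("alice", 3.0)]
con.executemany("INSERT INTO log VALUES (?, ?)", rows)

print(con.execute("SELECT * FROM log").fetchall())
# -> [('alice', 1.0), ('alice', 2.0), ('alice', 3.0)]
```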
  5. I think you already answered your own question. Each statement only affects a single row, which means you have to wrap arrays in loops. Don't forget that Execute SQL can handle multiple statements at once. This is shown in one of the examples: Note that there are multiple statements that utilize the same input cluster. You could use the Array To Cluster function to simplify the task: You are on the right path. In this case Execute SQL will report an error because only arrays of clusters are allowed:

     Please take the following section with a grain of salt; I'm making a few assumptions based on my own findings. @drjdpowell It would be great to have an example of how arrays of clusters are supposed to work. I opened the source code and it looks like it is supposed to take an array of clusters and iterate over each element in the order of appearance (as if it were a single cluster). So these should appear equivalent (the "Text" element of the second cluster is used for the 'Reason' field in this example): However, in the current version (1.12.2.91) this doesn't work at all. The table comes back empty. I had to slightly change Parse Parameters (Core) to get this to work. Before: Note that it parses the ArrayElement output of the GetArrayInfo VI, which only contains the type info but no data. After: This will concatenate each element of the array as if it were one large cluster. Perhaps I'm mistaken about how this is supposed to work?
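For what it's worth, the behavior I expected can be modeled in Python with sqlite3 (statement text and values are invented; this only sketches the flattening, not the library's internals):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER, reason TEXT)")

# Two "clusters", flattened in order of appearance as if they
# were one large cluster...
clusters = [(1, "start"), (2, "stop")]
flat = [value for cluster in clusters for value in cluster]

# ...whose elements then fill the parameter bindings of the
# statements one after another.
statements = ["INSERT INTO events VALUES (?, ?)"] * 2
it = iter(flat)
for stmt in statements:
    con.execute(stmt, [next(it) for _ in range(stmt.count("?"))])

print(con.execute("SELECT * FROM events").fetchall())
# -> [(1, 'start'), (2, 'stop')]
```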
  6. +1 for flushing the event queue. Here is another solution that involves unregistering and re-registering user events. Whenever the event fires, the event registration is destroyed. At the same time it sets the timeout for the timeout case, which will re-register the event and disable the timeout case again. This could be useful in situations where the consumer runs much (orders of magnitude) slower than the producer, in which case it makes sense to reconstruct the queue every time the consumer desires a new value. I haven't done any benchmarks, nor used it in any real-world application so far, but it works.
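There is no direct text-language equivalent of LabVIEW event registrations, but the effect (a slow consumer that only ever sees the newest value) can be approximated in Python like this (names and timings are arbitrary):

```python
import queue
import threading
import time

q = queue.Queue()

def producer():
    for i in range(1000):
        q.put(i)                       # fast producer
        time.sleep(0.001)

def consumer():
    while True:
        item = q.get()                 # block until at least one value
        while True:                    # drain the backlog: keep only the
            try:                       # newest value, like destroying and
                item = q.get_nowait()  # re-creating the registration
            except queue.Empty:
                break
        print("latest:", item)
        time.sleep(0.5)                # slow consumer

threading.Thread(target=producer, daemon=True).start()
threading.Thread(target=consumer, daemon=True).start()
time.sleep(2)
```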
  7. My consumers also tend to update the whole GUI if it doesn't impact the process negatively (it rarely does). I was looking for a solution that doesn't require each consumer to receive its own copy, in order to save memory. But as @Bhaskar Peesapati already clarified, there are multiple consumers that need to work lossless, which changes everything. Discarding events will certainly prevent the event queue from running out of memory. I actually have a project where I decided to use the Actor Framework to separate data processing from the UI, which filters UI update messages to keep them at about 10 Hz. Same thing, but with a bunch of classes. I'm pretty sure there are not many ways to write more code for such a simple task 😅
  8. I have only one project that uses multiple DVRs to keep large chunks of data in memory, which are accessed by multiple concurrent processes for various reasons. It works, but it is very difficult to follow the data flow without a chart that explains how the application works. In many cases there are good alternatives that don't require DVRs and that are easier to maintain in the long run. The final decision is yours, of course. I'm not saying that they won't work; you should just be aware of the limitations and feel comfortable using and maintaining them. I certainly won't encourage you to use them until all other options are exhausted. I agree. To be clear, it is not my intention to argue against events for sending data between loops. I'm sorry if it came across that way. My point is that the graphical user interface probably doesn't need lossless data, because that would throttle the entire system, and I don't know of a simple way to access a subset of data using events when the producer didn't specifically account for that.
  9. This will force your consumers to execute sequentially, because only one of them gets to access the DVR at any given time, which is similar to connecting VIs with wires. You could enable Allow Parallel Read-only Access so that all consumers can access it at the same time, but then there could be multiple data copies. Have you considered sequential processing? Each consumer could pass the data to the next consumer when it is done. That way each consumer acts as a producer for the next consumer until there are no more consumers, as sketched below. It won't change the amount of memory required, but at least the data isn't quadrupled and you can get rid of those DVRs (seriously, they will haunt you eventually).
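A rough Python sketch of such a chain (the stage names and sentinel-based shutdown are my own invention):

```python
import queue
import threading
import time

def stage(name, inq, outq):
    while True:
        data = inq.get()
        if data is None:              # sentinel: shut the pipeline down
            if outq is not None:
                outq.put(None)
            break
        data = f"{data}->{name}"      # placeholder "processing"
        if outq is not None:
            outq.put(data)            # act as producer for the next stage
        else:
            print(data)               # last consumer: nothing left to feed

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
for args in (("filter", q1, q2), ("scale", q2, q3), ("log", q3, None)):
    threading.Thread(target=stage, args=args, daemon=True).start()

for sample in range(3):
    q1.put(sample)                    # the original producer feeds stage 1
q1.put(None)
time.sleep(0.5)                       # let the daemon threads finish
```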
  10. Okay, so this is the event-driven producer/consumer design pattern. Perhaps I misunderstood this part: If one consumer runs slower than the producer, the event queue for that particular consumer will eventually fill up all memory. So if the producer had another event for these slow-running consumers, it would need to know about those consumers. At least that was my train of thought 🤷‍♂️😄
  11. Doesn't that require the producer to know about its consumers? You don't. Previewing queues is a lossy operation. If you want lossless data transfer to multiple concurrent consumers, you need multiple queues or a single user event that is used by multiple consumers. If the data loop dequeues an element before the UI loop can get a copy, the UI loop will simply have to wait for the next element. This shouldn't matter as long as the producer adds new elements fast enough. Note that I'm assuming a sampling rate of >100 Hz with a UI update rate somewhere between 10 and 20 Hz. For anything slower, previewing queue elements is not an optimal solution.
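The distinction, sketched in Python (Python's queue module has no peek, so a shared current-value variable stands in for Preview Queue Element; all names are made up):

```python
import threading
import queue

# lossless: every consumer gets its own queue and its own copy
consumer_queues = [queue.Queue() for _ in range(3)]

# lossy "preview": a current-value store the UI polls at its own pace
latest = None
latest_lock = threading.Lock()

def publish(sample):
    global latest
    for q in consumer_queues:     # lossless path: no sample is skipped
        q.put(sample)
    with latest_lock:
        latest = sample           # lossy path: old value is overwritten

def ui_poll():
    with latest_lock:
        return latest             # the UI may miss samples, by design
```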
  12. That is correct. Since the UI loop can run at a different speed, there is no need to send it all data. It can simply look up the current value from the data queue at its own pace without any impact on the other loops. How is a DVR useful in this scenario? Unless there are additional wire branches, there is only one copy of the data in memory at all times (except for the data shown to the user). A DVR might actually result in less optimized code. Events are not the right tool for continuous data streaming. It is much more difficult to have one loop run at a different speed than the other, because the producer decides when an event is triggered:
      • Each Event Structure receives its own data copy for every event.
      • Each Event Structure must process every event (unless you want to fiddle with the event queue 😱).
      • If events are processed more slowly than the producer triggers them, the event queue will eventually use up all memory, which means the producer must run slower than the slowest consumer. That is a no-go; you probably want your producer to run as fast as possible.
      Events are much better suited for command-like operations with unpredictable occurrence (a user clicking a button, errors, etc.).
  13. Welcome to LAVA! The loops in both of your examples are connected with Boolean wires, which forces them to execute sequentially. This is certainly not what you want. Also, both examples are variations of the Producer/Consumer design pattern. You probably want to store data losslessly, so a queue is fine. Just make sure that the queue does not eat up all of your memory (storing data should be faster than capturing new data). Maybe use TDMS files for efficient data storage. Use events if you have a command-like process with multiple sources (e.g. the UI loop could send a command to the data loop to change the output file). Displaying data is very slow and resource intensive. You should think about how much data to display and how often to update. It is probably not a good idea to update 24 MB worth of data every second. A good rule of thumb is to update once every 100 ms (which is fast enough to give the user the impression of being responsive) and only show the minimum amount of data necessary. In this case you could utilize the data queue and simply check for the next value every 100 ms, then decimate and display the data. Here is an example:
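(The original LabVIEW snippet isn't reproduced here; a rough Python analogue of the idea, with made-up names, looks like this:)

```python
import queue
import threading
import time

data_q = queue.Queue()       # lossless path: producer -> data loop
latest = [None]              # current value, shared with the UI loop

def data_loop():
    while True:
        sample = data_q.get()
        latest[0] = sample   # remember the newest value for the UI
        # ... append sample to a TDMS-like file here ...

def ui_loop():
    while True:
        time.sleep(0.1)      # update the display every 100 ms
        if latest[0] is not None:
            print("display:", latest[0])  # decimated view, not all data

threading.Thread(target=data_loop, daemon=True).start()
threading.Thread(target=ui_loop, daemon=True).start()

for sample in range(1000):   # stand-in for the acquisition producer
    data_q.put(sample)
    time.sleep(0.001)
```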
  14. You need to take time zones into account (UTC+14:00 in your case). By default, the Scan From String function returns time stamps in local time if you use the "%<>T" format specifier. This is mentioned under the Format Specifier Examples section, here: https://zone.ni.com/reference/en-XX/help/371361R-01/lvconcepts/format_specifier_syntax/ You'll get the right result if you use the "%^<>T" format specifier: "%[+-]%^<%H:%M>T"
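The same pitfall expressed in Python (the times are invented for illustration): interpreting a wall-clock string as UTC rather than local time changes the instant it denotes.

```python
from datetime import datetime, timedelta, timezone

# "04:30" parsed and explicitly tagged as UTC...
stamp = datetime.strptime("04:30", "%H:%M").replace(tzinfo=timezone.utc)

# ...reads as 18:30 on a clock in UTC+14:00
print(stamp.astimezone(timezone(timedelta(hours=14))).strftime("%H:%M"))
# -> 18:30
```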
  15. You have to install at least the LabVIEW Run-Time Engine for your particular version of LabVIEW, plus any dependencies. Otherwise you'll receive many error messages. The best way to go about it is to add a build specification for an installer to your LabVIEW project, which lets you distribute your application and all of its dependencies in one package. Here is a KB article that explains the process: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019PV6SAM Additionally, you can include MAX configurations from your machine and have them automatically installed on the target machine. Here is a KB article that explains the process for custom scales: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019MvuSAE There is probably a better source for this, but ni.com is down for maintenance right now... 😱
  16. Named queues will only work in the same instance of LabVIEW. If there are two instances, those queues are isolated from each other (labview1 and labview2 don't share their address space), which means you need inter-process communication. @drjdpowell If I remember correctly, the Messenger Library has built-in network capabilities, right?
  17. From what we have discussed so far, the Messenger Library certainly seems to be a perfect fit. It'll provide you with the infrastructure to run any number (and type) of workers and communicate with them in a command-like fashion. It is, however, a much more advanced framework than the simple message handler from my examples. As such, it will take more time to learn and use properly. As someone who enjoys the company of technicians who get scared by things like "classes" (not to mention inheritance 🙄), I strongly suggest evaluating the skill levels of your maintainers before going too deep into advanced topics. If nobody has the skills to maintain the solution, you could just as well do it in C++. Perhaps you can include them in the development process? This will make the transition much easier, and they'll know what is coming for them. If they also do some programming, they'll have nobody to blame 😉 @drjdpowell already mentioned his videos on YouTube. I really suggest you watch them in order to understand the capabilities of the Messenger Library. Here is also a link with more information about the message handler in my examples (sorry, no video; +1 for the Messenger Library): http://www.ni.com/tutorial/53391/en/
  18. You are trying to optimize something that really isn't a bottleneck. Even if each bit were represented by an 8-bit integer, the total size of your data is less than 200 bytes per device. Even with 100 devices, only 20 KB of memory is needed for all those inputs and outputs (analog and digital). In the unlikely event that there are 1000 consumers at the same time, each of which has its own copy, it will barely amount to 20 MB... As a C/C++ programmer I feel the urge for memory management, but this is really not something to worry about in LabVIEW, at least not until you hit the upper MB boundary.

      It might seem easier at first glance, but now all your consumers need to know the exact order of inputs and outputs (by index), which means you need to update every consumer when something changes. If you let the worker handle it, however (e.g. with a lookup table), consumers can simply "address" inputs and outputs by name. That way the data structure can change independently. You'll find this to be much more flexible in the future (e.g. for different hardware configurations).

      I'd probably use another worker that regularly (e.g. every 100 ms) polls the state of the desired input and sends the stop signal if needed. That, and the fact that the worker has to poll continuously even if there is no consumer. It is also not possible to add new features to such a worker, which can be problematic in case someone needs more features...

      Suggestion: Keep them in a separate list as part of the worker. For example, define a list of interrupt IOs (addresses) that the worker keeps track of. On every cycle, the worker updates the interrupt state (which is a simple OR condition). Consumers can use a special "read interrupt state" command to get the current state of a specific interrupt (you can still read the regular input state with the other command). When "read interrupt state" is executed, the worker resets the state, as sketched below. Now that I think about it, there are quite a few things I might just use in my own I/O Server... 😁
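A minimal Python sketch of that worker state (the channel names and the shape of the tables are hypothetical):

```python
# I/O addressed by name via a lookup table, so consumers don't
# depend on the physical order of channels
io_table = {"valve_1": False, "pressure_3": 0.0}

# interrupt addresses the worker watches, with latched states
interrupts = {"estop": False}

def poll_cycle(raw_inputs):
    """One worker cycle: refresh values and OR-latch interrupt states."""
    io_table.update(raw_inputs)
    for name in interrupts:
        interrupts[name] = interrupts[name] or bool(raw_inputs.get(name, False))

def read_interrupt_state(name):
    """The 'read interrupt state' command: reading resets the latch."""
    state = interrupts[name]
    interrupts[name] = False
    return state
```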
  19. Yes, either notifier or queue. You can store the notifier reference, or give the notifier a unique name (e.g. a GUID), which allows you to obtain an existing notifier by name if it already exists in the same address space (application). Here is an example using notifiers: Queues work the same way. Either your notification always contains the value for "read IO1", in which case the latest value also contains it, or you need to inform the worker about which channel to read, for example by sending a message to your worker that includes the desired channel name as well as a reply target. For things like this, the Queued Message Handler template (included with LabVIEW) or the Messenger Library are probably worth looking into. How much data are we talking about? Yes, there is some copying going on, but since the data is requested on demand, the overall memory footprint should be rather small, because memory is released before the next step starts. If you really need to gather a lot of data at the same time (e.g. 200 MB or more), there is the Data Value Reference, which gives you a (thread-safe) reference to the actual data. DVRs, however, should be avoided whenever possible, because they limit the ability of the compiler to optimize your code. Not to mention breaking dataflow, which makes the program much harder to read...
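(The LabVIEW example image isn't shown here.) The obtain-by-name idea translates roughly to the following Python sketch (class and function names are my own):

```python
import threading

_registry = {}                       # name -> Notifier
_registry_lock = threading.Lock()

class Notifier:
    """Rough stand-in for a LabVIEW notifier: lossy, multi-consumer."""
    def __init__(self):
        self._cond = threading.Condition()
        self._value = None

    def send(self, value):
        with self._cond:
            self._value = value      # lossy: overwrites the old value
            self._cond.notify_all()  # every waiting consumer wakes up

    def wait(self):
        with self._cond:
            self._cond.wait()        # block until the next notification
            return self._value       # each caller reads its own copy

def obtain_notifier(name):
    """Obtain an existing notifier by name, or create it if needed."""
    with _registry_lock:
        return _registry.setdefault(name, Notifier())
```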
  20. Sorry, I'm not familiar with TestStand. I'd assume that there is some kind of persistent state and perhaps a way to keep additional code running in the background, otherwise it wouldn't be very useful. For example, it should be possible to use Start Asynchronous Call to launch a worker that runs parallel to TestStand and which can exchange information via queues/notifiers whose references are maintained by TestStand (and that are accessible by step scripts). In this case, there would be one step to launch the worker, multiple steps to gather data, and one step to stop the worker. Maybe someone with more (any) experience in TestStand could explain if and how this is done.
  21. Oh, I see; my first impression was that the issue was about performance, not architecture. Here are my thoughts on your requirements. I assume that your hardware is not pure NI hardware (in which case you could simply use DAQmx).

      Create a base class for the API (LabVIEW doesn't have interfaces until 2020; a base class is the closest thing there is). It should have methods to open/close connections and read data. This is the Read API. For each specific type of hardware, create a child class and implement the driver-specific code (TCP/IP, UDP, serial). Create a factory class so that you can create new instances of your specific drivers as needed (see the sketch below). The only thing you need to work out is how to configure the hardware. I can imagine using a VISA Resource Name (scroll down to VISA Terminology) for all drivers, which works unless you need to use protocols that VISA doesn't support (TCP/IP, UDP, and serial are supported, though). Alternatively, create another base class for your configuration data and abstract from there. Of course, the same should be done for the Write API.

      The easiest way is to have two methods, one to read analog values and one to read digital values. Of course, hardware that doesn't support one or the other will have to return sensible default values. Alternatively, have two specific APIs for reading analog/digital values. However, due to the lack of multiple inheritance in LabVIEW (unless you use interfaces in 2020), hardware that needs to support both will have to share state somehow.

      It makes sense to implement this behavior as part of the polling thread and have it cache the data, so that consumers can access it via the API. For example, a device reads all analog and digital values, puts them in a single-element queue, and updates them as needed (dequeue, update, enqueue). Consumers never dequeue. They only use the Preview Queue Element function to copy the data (this will also allow you to monitor the last known state). This is only viable if the dataset is small (a few KB at most).

      Take a look at notifiers. They can have as many consumers as necessary, each of which can wait for a new notification (each one receives its own copy). There is also a Get Notifier Status function, which gives you the latest value of a notifier. Keep in mind, however, that notifiers are lossy.
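In Python terms, the Read API plus factory might look like this (the class names, methods, and the "tcp" driver are all invented for illustration):

```python
from abc import ABC, abstractmethod

class ReadAPI(ABC):
    """Base class standing in for a LabVIEW parent class (pre-2020 'interface')."""
    @abstractmethod
    def open(self, resource): ...
    @abstractmethod
    def read_analog(self): ...
    @abstractmethod
    def read_digital(self): ...
    @abstractmethod
    def close(self): ...

class TcpDevice(ReadAPI):
    """One child class per hardware type; driver-specific code lives here."""
    def open(self, resource):
        pass                       # e.g. open a TCP/IP connection
    def read_analog(self):
        return [0.0]               # placeholder measurement
    def read_digital(self):
        return []                  # no digital lines: sensible default
    def close(self):
        pass

def make_device(kind):
    """Factory: map a configuration string to a concrete driver."""
    drivers = {"tcp": TcpDevice}   # extend with "udp", "serial", ...
    return drivers[kind]()

device = make_device("tcp")        # consumers only see the ReadAPI base
```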
  22. Arrays of clusters and arrays of classes will certainly have a much higher memory footprint than an array of any primitive type. The number of data copies, however, depends on your particular implementation and isn't affected by the data type. LabVIEW is also quite smart about avoiding memory copies: https://labviewwiki.org/wiki/Buffer_Allocation Perhaps, if you could show the particular section of code that has a high memory footprint, we could suggest ways to optimize it. I don't want to be mean, but this particular approach will result in high CPU usage and low throughput. What you built requires a context switch to the UI thread on every call to one of those property nodes, which is equivalent to simulating keyboard presses to insert and read large amounts of data. Not to mention that it forces LabVIEW to copy all data on every read/write operation... Certainly not the recommended way to do it 😱 You don't need to sync reads and writes for property nodes; they are thread safe. Do you really need to keep all this data in memory in large chunks? It sounds to me as if you need to capture a stream of data, process it sequentially, and output it somewhere else. If that is the case, perhaps the producer/consumer template, which comes with LabVIEW, is worth looking into.
  23. If we are talking about automating factories, isn't Factorio the game of choice, rather than Minecraft? 😋