Everything posted by LogMAN

  1. @ensegre is right. Use the Show Buffer Allocations tool (Tools > Profile > Show Buffer Allocations) to visualize buffer allocations. It shows a black dot wherever a buffer is allocated, i.e. where a copy can occur.
  2. Yes, that works. It creates a copy of the unbundled value (which requires more memory). The IPE Structure avoids this copy by overwriting the original value. This is also explained in the docs: Unbundle / Bundle Elements - NI. Please note that my example is simple enough that the compiler can probably optimize it on its own. Here is another example that the compiler cannot optimize on its own because of the Select:
  3. The In-Place Element (IPE) Structure can be used for memory optimization purposes. It is most useful for large datasets, for example to modify the value of a cluster in place (hence the name). Here is a simple example: This is functionally equivalent to using `Unbundle By Name` followed by `Bundle By Name`, but it allows the compiler to avoid a memory copy for the OK value and increment it in place. Note that there are different kinds of border nodes that you can use on the IPE Structure: In Place Element Structure - NI. In your example, the Data Value Reference Read / Write Element border nodes are used: Data Value Reference Read / Write Element - NI. They allow you to access the value of a DVR in place so that you can read, modify, and write a new value to the DVR. While the value is being used in one IPE Structure, no other IPE Structure can access it (all other IPE Structures that attempt to access the DVR at the same time are blocked). Since a new DVR is created for each instance of `Modbus master`, this ensures that multiple `Modbus master` instances can execute in parallel (non-blocking), but for each individual `Modbus master`, only one read or write operation can happen at a time (blocking). Yes and no. Yes, because it is functionally equivalent to an FGV (it prevents race conditions when reading/writing the value). No, because it is not necessarily global (there may be multiple instances of `Modbus master`, each with its own copy of `mutex`). You can think of it like an FGV that is created for each instance of `Modbus master`. Note, however, that the value of the DVR is never used in your example. It only serves as a synchronization mechanism, so in this particular case the data type of the DVR doesn't actually matter. If you have large datasets (for example an array that takes several MB or GB of memory), they are very good candidates for a DVR so that memory copies can be avoided while you work on them, especially in 32-bit, where memory is relatively limited. Since DVRs are by-reference, you don't need to connect `Modbus master out`. It will work even if you split the wire (only the DVR is copied, not the value inside the DVR). In this particular case, yes - if you had a different Semaphore for each instance of `Modbus master`.
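     For readers who think in text-based languages, here is a rough Python analogy (not LabVIEW, and the class and method names are made up for illustration): a DVR behaves like a value guarded by a lock, and the Read/Write Element border nodes behave like a block that holds the lock while the value is modified in place.

         import threading

         class DataValueReference:
             # Rough analogy of a DVR: a value plus a lock that serializes access.
             def __init__(self, value):
                 self._value = value
                 self._lock = threading.Lock()

             def modify(self, update):
                 # Analogous to the Read/Write Element border nodes of the IPE Structure:
                 # while one caller holds the lock, all others block until it is released.
                 with self._lock:
                     self._value = update(self._value)
                     return self._value

         # Each 'Modbus master' instance gets its own reference, so two instances never
         # block each other, but operations on the same instance are serialized.
         master_a = DataValueReference(0)
         master_b = DataValueReference(0)
         master_a.modify(lambda x: x + 1)  # blocks only other users of master_a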
  4. Welcome to LavaG. This is a queue: What Is a Queue in LabVIEW? - NI. You probably tried to delete the control inside the queue indicator. This does not work because a queue must always have an element data type (subtype). As the error message suggests, simply drag a new type onto the queue indicator and it will replace the existing one. Alternatively, use the 'Obtain Queue' function on your block diagram and create an indicator from its output, which will use the configured element type.
  5. It probably selects all elements before it applies the filter. You can get more insight with the EXPLAIN query: EXPLAIN (sqlite.org). Without the database it's difficult to verify the behavior myself. It may be more efficient to query channels from a table than from JSON, especially when the channel names are indexed; that way SQLite can optimize queries more effectively. Find attached an example database that stores each data point individually. Here is a query that will give you all data points for all time stamps:

     SELECT TimeSeries.Time, Channel.Name, ChannelData.Value
     FROM TimeSeries
     INNER JOIN TimeSeriesChannelData ON TimeSeries.Id == TimeSeriesChannelData.TimeSeriesId
     INNER JOIN ChannelData ON TimeSeriesChannelData.ChannelDataId == ChannelData.Id
     INNER JOIN Channel ON ChannelData.ChannelId == Channel.Id

     You can also transpose the table to get channels as columns. Unfortunately, SQLite does not have a built-in function for this, so the channel names are hard-coded (not viable if channel names are dynamic):

     SELECT TimeSeries.Time,
            MAX(CASE WHEN Channel.Name = 'Channel 0' THEN ChannelData.Value END) AS 'Channel 0',
            MAX(CASE WHEN Channel.Name = 'Channel 1' THEN ChannelData.Value END) AS 'Channel 1',
            MAX(CASE WHEN Channel.Name = 'Channel 2' THEN ChannelData.Value END) AS 'Channel 2'
     FROM TimeSeries
     INNER JOIN TimeSeriesChannelData ON TimeSeries.Id == TimeSeriesChannelData.TimeSeriesId
     INNER JOIN ChannelData ON TimeSeriesChannelData.ChannelDataId == ChannelData.Id
     INNER JOIN Channel ON ChannelData.ChannelId == Channel.Id
     GROUP BY TimeSeries.Time

     If query performance is important, you could perform the downsampling in the producer instead of the consumer (downsample as new data arrives). In that case you trade storage size for query performance - whichever is more important to you. Probably in a database 🤣 Seriously, though, these kinds of data are stored and processed in large computing facilities that have enough computing power to serve the data in a fraction of the time a normal computer would need. They probably also use different database systems than SQLite, some of which may be better suited to these kinds of queries. I have seen applications for large time series data on MongoDB, for example. As computing power is limited, it is all about "appearing as if it was very fast". As mentioned before, you can pre-process your data so that it is readily available. This, of course, requires additional storage space and only works if you know how the data is used. In your case, you could pre-process the data into chunks of 2000 data points for display on the graph, store them next to the raw data, and have them readily available. There may be ways to optimize your implementation, but there is no magic bullet that will make your computer compute large datasets in split seconds on demand (unless you have the necessary computing power, in which case the magic bullet is called "money"). dbtest.db.sql
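     The actual schema is in the attached dbtest.db.sql, which is not reproduced here. The sketch below is only a guess at a minimal version of it (table and column names taken from the queries above, column types assumed), using Python's built-in sqlite3 module, with EXPLAIN QUERY PLAN to check whether SQLite uses the indexes or scans the tables.

         import sqlite3

         # Assumed minimal schema implied by the queries above (column types are guesses).
         schema = """
         CREATE TABLE Channel (Id INTEGER PRIMARY KEY, Name TEXT);
         CREATE TABLE TimeSeries (Id INTEGER PRIMARY KEY, Time REAL);
         CREATE TABLE ChannelData (Id INTEGER PRIMARY KEY,
                                   ChannelId INTEGER REFERENCES Channel(Id),
                                   Value REAL);
         CREATE TABLE TimeSeriesChannelData (TimeSeriesId INTEGER REFERENCES TimeSeries(Id),
                                             ChannelDataId INTEGER REFERENCES ChannelData(Id));
         CREATE INDEX idx_channel_name ON Channel(Name);
         """

         query = """
         SELECT TimeSeries.Time, Channel.Name, ChannelData.Value
         FROM TimeSeries
         INNER JOIN TimeSeriesChannelData ON TimeSeries.Id = TimeSeriesChannelData.TimeSeriesId
         INNER JOIN ChannelData ON TimeSeriesChannelData.ChannelDataId = ChannelData.Id
         INNER JOIN Channel ON ChannelData.ChannelId = Channel.Id
         """

         con = sqlite3.connect(":memory:")
         con.executescript(schema)
         # EXPLAIN QUERY PLAN reports whether tables are scanned or indexes are used.
         for row in con.execute("EXPLAIN QUERY PLAN " + query):
             print(row)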
  6. Yes. Yes. It will still use the runtime engine, which is part of the installation. Here is a KB article with more information on how to bundle different report classes with your executable: Create a Stand-Alone Application Including Report Generation VIs - NI
  7. I've been using OPC UA SDK for .NET - TRAEGER Docs, which requires a developer license (one-time, or perpetual if you want updates) and is based on .NET. They have a GitHub repo with an OPC UA client example for LabVIEW --> opcuanet-samples/lv/Basic/Client at master · Traeger-GmbH/opcuanet-samples (github.com). For my project I implemented the server in .NET and use event callbacks and objects to pass data from/to the node manager. This works very well if you are familiar with the common pitfalls of calling external code in LV... I also looked at OPCFoundation/UA-.NETStandard: OPC Unified Architecture .NET Standard (github.com), which is free if you are a member of the OPC Foundation, and OPCFoundation/UA-ModelCompiler: ModelCompiler converts XML files into C# and ANSI C (github.com), which is a code generator that turns XML Node Sets into C# / C code. I haven't worked much with it as my current solution works like a charm 🤷‍♂️ Have you considered upgrading? There is also OpenG LabPython Library Toolkit for LabVIEW - Download - VIPM by JKI. IIRC it required a license and I'm not sure if it works with Python 3 and newer versions of LabVIEW.
  8. I agree, without any code it is difficult to explain how your particular VI works. That said, when using DAQmx you can simply define the sample rate of your task and request the desired number of samples to achieve the desired loop time. Here is an example that reads a voltage from a channel at a rate of 1000 Hz, 500 samples at a time. The read function is blocking, so the loop runs at exactly 500 ms intervals. Timing Example.vi
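     The attached Timing Example.vi is LabVIEW code. Purely for comparison, here is roughly the same pattern using the nidaqmx Python package; the device/channel name "Dev1/ai0" is a placeholder, and the numbers mirror the example above.

         import nidaqmx
         from nidaqmx.constants import AcquisitionType

         # Hardware-timed acquisition: 1000 Hz sample clock, 500 samples per read.
         # The read blocks until 500 samples are available, so each loop iteration
         # takes 500 ms regardless of how fast the loop itself could spin.
         with nidaqmx.Task() as task:
             task.ai_channels.add_ai_voltage_chan("Dev1/ai0")  # placeholder channel
             task.timing.cfg_samp_clk_timing(rate=1000,
                                             sample_mode=AcquisitionType.CONTINUOUS)
             task.start()
             for _ in range(10):
                 data = task.read(number_of_samples_per_channel=500)  # blocks ~500 ms
                 print(len(data))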
  9. There is no way to change the scope of elements inside a cluster; you can only hide them on the front panel. Your solution to unbundle the private cluster and bundle the public cluster is the best way to hide internal complexity. If you just want to omit certain parameters without explicitly unbundling/bundling, you could also serialize to and from JSON. Of course, this comes at a performance cost. Note, however, that this only works if the element names are exactly the same. Here is an example using JSONtext: Convert Clusters using JSONtext.vi
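     The attached VI converts LabVIEW clusters with JSONtext. As a text-based illustration of the same idea (serialize the private type, then deserialize only the fields the public type declares), here is a small Python sketch; the type and field names are made up.

         import json
         from dataclasses import dataclass, fields

         @dataclass
         class PrivateData:        # "private cluster" with an internal element
             name: str
             value: float
             internal_counter: int

         @dataclass
         class PublicData:         # "public cluster": shared elements, same names
             name: str
             value: float

         def convert(src, target_cls):
             # Round-trip through JSON and keep only the fields the target declares.
             # Matching is by element name, which is why the names must be identical.
             data = json.loads(json.dumps(src.__dict__))
             allowed = {f.name for f in fields(target_cls)}
             return target_cls(**{k: v for k, v in data.items() if k in allowed})

         print(convert(PrivateData("ch0", 1.23, 42), PublicData))
         # PublicData(name='ch0', value=1.23)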
  10. It is actually much faster on my machine. Here are a few results (Windows 11, LabVIEW 2020 SP1, 32-bit):
     @Łukasz Fast solution: ~30 µs
     @cordm Case 1 (really slow): ~403 µs
     Case 2 (good performance and readability): ~54 µs -- output is wrong, see below
     Case 3 (): ~235 µs
     Case 4 (original solution): ~30 µs
     Case 5 (LV200000_BLASLAPACK.dll): ~14 µs
     Case 6 (LVBLAS.dll:BLASCopyVectorH): ~16 µs
     Case 2 actually truncates the last value because the length of the source array becomes odd. Here are two possible fixes; the second one is slightly faster for me. 1) Append the final element: ~60 µs (slightly slower than before). 2) Rotate the string before conversion: ~42 µs.
  11. Probes on the top-level diagram of the parallel For Loop simply show no debug info unless debugging is enabled. Any subdiagram will do the trick.
  12. No. The image shows how to assign multiple event sources to a single event case in the Event Structure. You would have to create a custom user event and handle it in another event case to do what you describe. It is just not a good solution for your particular use case.
  13. Yes. One event can be triggered by as many buttons as you want. Technically yes but this is bad design because it would have to go through the UI thread, which is super slow. There are more robust ways to do that. Please take a look at the "Continuous Measurement and Logging" project template that ships with LabVIEW.
  14. You should find the CAN palette under Measurement I/O. That said, NI CAN is only for legacy CAN hardware. For newer hardware, NI XNET is the way to go: NI-XNET CAN, LIN, and FlexRay Platform Overview - NI Edit: Forgot to mention NI Example Finder, which includes several examples on how to use the API (via Help > Find Examples).
  15. Not sure if this is relevant, but there appears to be an issue with the file paths when loading the script node. The debugging window is displayed when placing the node on the block diagram (notice the unreadable characters in the file extension): It often results in a crash, but when it doesn't, you get another debugging window when executing the VI: It also reports the same error code as the one mentioned above: This is running on Windows 11 using LabVIEW 2013 (32-bit) and Python 2.7 (registered in PATH) with the latest version of LabPython from VIPM. msvcrt.dll is available in SysWOW64 and System32 (part of Windows). Not sure what causes the issue, but at the very least it doesn't appear to be isolated to Windows Server 2019 🤷‍♂️
  16. Your image is broken for me because it resides in your Gmail account. Can you attach it directly to your post? Did you add Python 2.7 to the PATH environment variable? Also make sure only one version is added to PATH, otherwise it may look up the wrong version. This was the reason for me in the past.
  17. Perhaps diagram zoom could be utilized, or the change could be displayed in an overlay (or both combined)? For example, there could be an icon to indicate that Nigel has a suggestion: When hovering over the icon, it could display the suggested diagram in an overlay from which I can choose to apply it: Suggestions could also be displayed at subdiagram level, depending on the scope of the suggestion. Once applied, the diagram grows to fit the new content.
  18. Welcome to the forums 🎉 I haven't really thought about it, as this was the first time I learned about Nigel (I never thought anything like that was even remotely possible in LV). Still, I'm familiar with GitHub Copilot and Visual Studio's IntelliCode, which have great IDE integration. What I'm looking for is not so much an AI that writes my code (because I know how to do that), but one that accelerates my development process by suggesting changes in the context of my code. For example, to predict what I'm going to do and provide hints in the form of grayed-out suggestions I can simply accept by pressing a key (tab-driven development 😉). Things like:
     • When I place an "open connection" function, it suggests the corresponding "close connection" function.
     • When I place several methods of a class or VISA or DAQmx, it suggests how to order them in a sensible manner (open, read, write, close).
     • When I connect the terminals of a VI, it suggests connecting error wires as well.
     • When I place error terminals, it suggests adding an error case structure.
     Of course, there are more specialized tasks where a smart AI would also be really useful:
     • Creation of driver libraries, just like your example of importing a PDF and generating code from it.
     • Configuration of CLFNs
     • Beautify/clean up my block diagram
     • Suggest changes when upgrading to a newer version of LabVIEW
     • Suggest icons for my VIs 😍
     • Derive VI descriptions from code
     • Apply changes to a set of VIs (e.g., renaming them)
     • Point out mistakes in my code (missing cases, unhandled errors, etc.)
     That's awesome. I hope we can play with it soon.
  19. 1:02:00 "... but NIgel can control your physical hardware!" *terminator theme intensifies* Jokes aside, the copilot looks interesting, but it needs much better integration to be useful for any of my day-to-day tasks. It also needs much faster response times, and it will have to show how well it can work with legacy and non-standard code bases that weren't developed with NI's portfolio and vision in mind. Did they mention if and when this can be tried out?
  20. I'm not familiar with network streams, but the online help provides an example for your particular scenario (scroll to the very bottom): Specifying Network Stream Endpoint URLs - NI. Based on your code, this works for me: Working Example.zip Edit: To provide some more explanation: Only one application can create endpoints in the default (empty) context "//localhost/". Incidentally, this is also the default context when you create a writer endpoint by name. For example, writer endpoint "my_writer" is equivalent to "//localhost/my_writer". In your particular example, Application B creates a writer endpoint "ApplicationA_in_writer", which is equivalent to "//localhost/ApplicationA_in_writer". Application C creates a writer endpoint "ApplicationB_in_writer", which is equivalent to "//localhost/ApplicationB_in_writer". And since only one application can use the default context, the error occurs. To create endpoints in separate contexts, you must specify the context in the endpoint name. For example, "//localhost:ApplicationB/ApplicationA_in_writer" and "//localhost:ApplicationC/ApplicationB_in_writer".
  21. (Re: ActiveX) I'm not quite sure if this is what you are looking for, but here is an example that works for me: Excel Formula.vi
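     The attachment is a LabVIEW VI that drives Excel through ActiveX. As a rough sketch of the same automation idea from Python (using the pywin32 package; the cell addresses and formula are arbitrary examples, not taken from the attachment):

         import win32com.client  # pip install pywin32

         excel = win32com.client.Dispatch("Excel.Application")
         excel.Visible = True
         workbook = excel.Workbooks.Add()
         sheet = workbook.Worksheets(1)

         # Write two values and a formula, then read back the calculated result.
         sheet.Range("A1").Value = 2
         sheet.Range("A2").Value = 3
         sheet.Range("A3").Formula = "=SUM(A1:A2)"
         print(sheet.Range("A3").Value)  # 5.0

         workbook.Close(SaveChanges=False)
         excel.Quit()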
  22. A union is always sized to its largest member, not the sum of its members. In your case, 4 bytes. You currently provide 8 bytes of memory. Try reducing the size of the union to 4 bytes.
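     The same sizing rule can be checked quickly with Python's ctypes (the member names here are arbitrary): a union of a 4-byte integer and a 4-byte float occupies 4 bytes, not 8.

         import ctypes

         class ExampleUnion(ctypes.Union):
             # All members share the same memory, so the union is as large as its
             # largest member (4 bytes here), not the sum of the members (8 bytes).
             _fields_ = [("as_int", ctypes.c_int32),    # 4 bytes
                         ("as_float", ctypes.c_float)]  # 4 bytes

         print(ctypes.sizeof(ExampleUnion))  # 4, not 8

         u = ExampleUnion()
         u.as_int = 0x3F800000
         print(u.as_float)  # 1.0 -- the same 4 bytes reinterpreted as a float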
  23. +1 for Unbundle. It is simple, requires less code, and you understand immediately that these elements belong to the current class. The elements are also easier to maintain in case you ever feel the need to change the name or type of an element, and they work well with In-Place Element Structures in unbundle-bundle scenarios.
  24. A few months later, this is what Bing Image Creator produces for the same input: Can confirm, wires everywhere...
  25. Not like this. Because that is the goal: break down your complex and complicated data types into simple and uncomplicated ones. For configuration data, you could maintain the path to the storage location and load the data as needed.