
All Activity


  1. Yesterday
  2. Yikes, the Gpower IOLink toolkit looks great, but the cost is prohibitive. At $1000 I was inclined to give it a go, but I would never commit to such a cost on a subscription basis, and as you said, they also require an annual runtime license. What a disaster that would be for any of my customers when their critical systems stop working on Jan 1st until they pony up their subscription fees. Like you, I think I will just dip my toes in the RESTful JSON waters and then see what it will take to generalize it.
  3. We've done a couple one-off IO-Link implementations, but it was by no means an all-encompassing tool that would work with the general standard. We would pay for a development license, but the deployment license costs from the one LV IOLink toolkit vendor were untenable as we have hundreds of systems. We'd be very interested in something like what you described. Please post updates! Is your intention to open source it or to commercialize it?
  4. Now that we are NI/Emerson, I would expect to see more connectivity in the automation sector. IO-Link seems to be a dominant communications standard that would be a good candidate for support in LabVIEW. IFM is one company that makes a ton of really cool sensors and networking hubs that are largely IO-Link capable. It should be relatively straightforward to develop a RESTful IO-Link interface using GET/POST with JSON-formatted commands. I am planning on doing this and was wondering if anyone has gone down this path before. https://www.ifm.com/us/en/shared/technologies/io-link/select-products/product-configuration-pages/dataline-tee-cable-wiring
  5. Note the WITHOUT ROWID keyword also, as that could make a significant performance improvement with this kind of table.
  6. Great, this works with my first method where I wasn't storing in JSON (similar to what LogMAN suggested, just in a single table) and makes a huge difference. I didn't know a primary key could be like that. This really helped, thanks!
  7. I had considered this, but was worried about losing any important points between those decimations. I should have mentioned in the original post that the plan is to give the user a choice between average and min/max so any outliers can be viewed. Doing a table per channel is an interesting idea; I will keep that in mind. Thank you for sharing the benchmarking.
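      For reference, a minimal sketch of what that min/max option could look like against the Data_1s schema from the original post (the one-minute bucket and the time range are only illustrative):

          -- Illustrative: per-minute min/max for one channel so outliers survive decimation
          SELECT (Time/60)*60 AS TimeGroup,
                 MIN(json_extract(Data, '$."Channel 0"')) AS Min_Channel_0,
                 MAX(json_extract(Data, '$."Channel 0"')) AS Max_Channel_0
          FROM Data_1s
          WHERE Time BETWEEN 1717571628 AND 1718860565
          GROUP BY TimeGroup;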
  8. This was what I was originally planning, but when I found out about down sampling in SQLite I wanted to see if I could avoid the duplicate data. I think a combination of down sampling and multiple tables will result in the best overall performance. Shame that getting the money can't be solved with another magic bullet... where did I put my magic gun? Thank you for the overall table schema; I was thinking about reducing its size by using a channel index/name table as well, so thanks for the heads up on that one.
  9. In NI MAX I do not receive a response from the power supply. It communicates through NI-VISA over a COM port (it is on COM4). It cannot be reached over Ethernet because of the location and the equipment at the site where it is installed.
  10. Last week
  11. A non-JSON option you could try is:

          CREATE TABLE TestData (
              Channel,
              Time,
              Data,   -- individual reading at Time for Channel
              PRIMARY KEY (Channel, Time)
          ) WITHOUT ROWID

      This is every reading sorted by a Primary Key that is Channel+Time. This makes looking up a specific channel in a specific time range fast. BTW, you don't need to make an index on a Primary Key; there is already an implicit index. You would select using something like:

          SELECT (Time/60)*60, Avg(Data)
          FROM TestData
          WHERE Channel=? AND Time BETWEEN ? AND 1717606846
          GROUP BY Time/60
  12. Looking for a senior LabVIEW/TestStand software engineer to join the team working on test systems for module and space vehicle test: Sr. Engineer - Automation & Integration - LabVIEW

      - Design, build, and maintain efficient, reusable, and reliable frameworks and test scripts
      - Automate test cases and integrate them into the software deployment pipeline
      - Improve test coverage by identifying untested code areas and developing appropriate unit and system tests to cover them
      - Perform individual and group code reviews to ensure the quality and functionality of automated tests
      - Analyze and enhance performance testing strategies for complex systems
      - Provide guidance and mentorship to junior engineers and team members
      - Collaborate with manufacturing and quality assurance teams to identify opportunities for automation and process improvements
      - Work with hardware engineers and software developers to define system requirements and ensure seamless integration of automated solutions
      - Create software verification matrices to show system test requirements coverage
      - Conduct feasibility studies and provide recommendations on the implementation of automation technologies to support high-volume production
      - ...and more!

      It's a fun team with some very talented developers across all of the EGSE we use in production - lots of interchangeable hardware platforms/test racks, HALs, integrating code from other languages (Python is an advantage), OIs, databases, ERP control, environmental chamber automation for long tests... We're expanding our production lines, so there's the opportunity to make a real difference in how we scale. Come join us!
  13. For sure - there are companies out there that have LabVIEW teams with multiple developers: they're good fun and you learn from the people around you.
  14. I heard that when LabVIEW 6i was released
  15. Also, please post the VI - the issue might be in the code, not the front panel.
  16. You want a table per channel. If you want to decimate, then use something like (rowid %% %d == 0), where %d is the decimation number of points. The graph display will do bilinear averaging if the data is more than the number of pixels it can show, so don't bother with that unless you want a specific type of post analysis. Be aware of aliasing, though. The above is a section of code from the following example; you are basically doing a variation of it. It selects a range and displays 'Decimation' number of points from that range, but the range selection is obtained by zooming on the graph rather than a slider. The query update rate is approximately 100 ms, and it doesn't change much even for a few million data points in the DB. It was a few versions ago, but I did do some benchmarking of SQLite, so to give you some idea of what affects performance:
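      As a rough sketch, the decimation filter above could be used in a query like the following (the Channel_0 table, its columns, the factor of 100, and the time range are placeholders for illustration):

          -- Keep roughly every 100th stored point for one channel's table
          SELECT Time, Value
          FROM Channel_0
          WHERE (rowid % 100) == 0
            AND Time BETWEEN 1717571628 AND 1718860565;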
  17. Welcome to LavaG. This is a queue: What Is a Queue in LabVIEW? - NI. You probably tried to delete the control inside the queue indicator. This does not work because a queue must always have a subtype. As the error message suggests, simply drag a new type onto the queue indicator and it will replace the existing one. Alternatively, use the 'Obtain Queue' function on your block diagram to create a new indicator based on the configured input type.
  18. It probably selects all elements before it applies the filter. You can get more insight with the EXPLAIN query: EXPLAIN (sqlite.org). Without the database it's difficult to verify the behavior myself. It may be more efficient to query channels from a table than from JSON, especially when the channel names are indexed. That way, SQLite can optimize queries more efficiently. Find attached an example of a database that stores each data point individually. Here is a query that will give you all data points for all time stamps:

          SELECT TimeSeries.Time, Channel.Name, ChannelData.Value
          FROM TimeSeries
          INNER JOIN TimeSeriesChannelData ON TimeSeries.Id == TimeSeriesChannelData.TimeSeriesId
          INNER JOIN ChannelData ON TimeSeriesChannelData.ChannelDataId == ChannelData.Id
          INNER JOIN Channel ON ChannelData.ChannelId == Channel.Id

      You can also transpose the table to get channels as columns. Unfortunately, SQLite does not have a built-in function for this, so the names are hard-coded (not viable if channel names are dynamic):

          SELECT TimeSeries.Time,
                 MAX(CASE WHEN Channel.Name = 'Channel 0' THEN ChannelData.Value END) AS 'Channel 0',
                 MAX(CASE WHEN Channel.Name = 'Channel 1' THEN ChannelData.Value END) AS 'Channel 1',
                 MAX(CASE WHEN Channel.Name = 'Channel 2' THEN ChannelData.Value END) AS 'Channel 2'
          FROM TimeSeries
          INNER JOIN TimeSeriesChannelData ON TimeSeries.Id == TimeSeriesChannelData.TimeSeriesId
          INNER JOIN ChannelData ON TimeSeriesChannelData.ChannelDataId == ChannelData.Id
          INNER JOIN Channel ON ChannelData.ChannelId == Channel.Id
          GROUP BY TimeSeries.Time

      If query performance is important, you could perform the down sampling in the producer instead of the consumer (down sample as new data arrives). In this case you trade storage size for query performance, whichever is more important to you.

      Probably in a database 🤣 Seriously, though, these kinds of data are stored and processed in large computing facilities that have enough computing power to serve the data in a fraction of the time a normal computer would need. They probably also use different database systems than SQLite, some of which may be better suited to these kinds of queries. I have seen applications for large time series data on MongoDB, for example. As computing power is limited, it is all about "appearing as if it was very fast". As mentioned before, you can pre-process your data so that it is readily available. This, of course, requires additional storage space and only works if you know how the data is used. In your case, you could pre-process the data to provide it in chunks of 2000 data points for display on the graph. Store it next to the raw data and have it readily available. There may be ways to optimize your implementation, but there is no magic bullet that will make your computer magically compute large datasets in split seconds on demand (unless you have the necessary computing power, in which case the magic bullet is called "money").

      dbtest.db.sql
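      As a hedged illustration of that producer-side pre-processing, reusing the table names from the attached dbtest schema (the ChannelData_1min summary table and the one-minute bucket are assumptions for illustration, not part of the attachment):

          -- Assumed summary table, kept next to the raw data
          CREATE TABLE IF NOT EXISTS ChannelData_1min (
              ChannelId INTEGER,
              TimeGroup INTEGER,   -- Unix time rounded down to the minute
              AvgValue  REAL,
              PRIMARY KEY (ChannelId, TimeGroup)
          ) WITHOUT ROWID;

          -- Recomputes the per-minute averages from the raw tables; a producer would
          -- typically restrict this to the time window of the newly arrived samples.
          INSERT OR REPLACE INTO ChannelData_1min (ChannelId, TimeGroup, AvgValue)
          SELECT ChannelData.ChannelId,
                 (TimeSeries.Time/60)*60,
                 AVG(ChannelData.Value)
          FROM TimeSeries
          INNER JOIN TimeSeriesChannelData ON TimeSeries.Id == TimeSeriesChannelData.TimeSeriesId
          INNER JOIN ChannelData ON TimeSeriesChannelData.ChannelDataId == ChannelData.Id
          GROUP BY ChannelData.ChannelId, (TimeSeries.Time/60)*60;

      The graph consumer can then read from the summary table for wide time windows and fall back to the raw tables when zoomed in.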
  19. What is this function, and why is this error popping up?
  20. Do you get a reply to *IDN? in NI MAX? Do you send a CH n command? Does it work when you use the Ethernet interface?
  21. Hello, I have this power supply and I need to both write data to it and read data from it. I need to communicate with it from LabVIEW 2013. It communicates serially via USB-B, but it always gives me this error and never communicates at all. Does anyone know what's happening?
  22. Good morning. Firstly, thanks James for creating the excellent SQLite and JSONText libraries. I've been working on a proof of concept for storing time series data in an SQLite database. As it stands I've decided upon a simple table with two columns: Unix time (as my PK) and Data as a JSON string with an unknown number of channels. The schema is:

          CREATE TABLE [Data_1s](
              [Time] PRIMARY KEY,
              [Data]);
          CREATE INDEX [idx_time] ON [Data_1s]([Time]);

      Currently my example dataset is 50 channels at 1 Hz for 1 day. I'm aiming to test this with a year's worth of data.

      Point Values

      I'm getting very good performance when extracting an individual time row (e.g. via a slider between the start and end time of the data set). The current query I'm using is based on an older form of storing the time (as a decimal), so I searched for a 1 s period:

          SELECT Time, Data FROM Data_1s WHERE Time BETWEEN 1717606845 AND 1717606846

      I then collect the results, extract the individual channels from the JSON data and pop them into a map. This can of course be optimised, but given it is sub 5 ms, it is plenty quick enough for interacting with the data via an HMI.

      Graph

      Anyway, when it comes to extracting XY data to display on a graph, I use the following to, for example, extract only Channel 0:

          SELECT Time, json_extract(data,'$."Channel 0"') AS Channel_0
          FROM Data_1s
          WHERE Time BETWEEN 1717656137 AND 1718860565

      In the above example I read 1892 elements from the database, and it takes ~19 ms. Fast enough for the user to drag a slider around and change the time window. However, if I go for my full example data window, e.g.:

          SELECT Time, json_extract(data,'$."Channel 0"') AS Channel_0
          FROM Data_1s
          WHERE Time BETWEEN 1717571628 AND 1718860565

      it takes 852 ms to read 86400 elements. If I go for all 50 channels, it increases to 8473 ms. Now, with a graph width of 2000-odd pixels, there isn't much point in loading all that data into LabVIEW, so I implemented an average down sampling query based on an interval size:

          WITH TimeGroups AS (
              SELECT (Time/60)*60 AS TimeGroup,
                     json_extract(data,'$."Channel 0"') AS Channel_0
              FROM Data_1s
              WHERE Time BETWEEN 1717571628 AND 1718860565
          )
          SELECT TimeGroup, AVG(Channel_0) AS Avg_Channel_0
          FROM TimeGroups
          GROUP BY TimeGroup;

      This takes 1535 ms to run and returns 1441 elements, which is worse than reading the 86400 elements and letting LabVIEW manage the down sampling.

      The questions I broadly have are:

      - Am I totally off base with the schema and using JSON to store the (unknown) number of channels?
      - Should I be looking at a different way of down sampling?
      - How are large datasets like stock prices stored and distributed for display on the internet? Some must have very long time series data spanning decades! How do you store and quickly extract datasets for display?

      Although I feel my questions are pretty broad rather than code specific, I can package the code up and share it after I strip a bunch of bloat out, as it is still very much a POC, if that would help.

      Thanks
      Peter
  23. We could allocate/resize the array, but it is highly complicated. There are two basic possibilities:

      1) Using NumericArrayResize() is possible, but you need to calculate the byte size yourself. With complex datatypes (clusters) the actual byte size can depend on the bitness of your compilation and contain extra alignment (filler) bytes for anything other than Windows 32-bit compilation. It really gets complicated, but the advantage is that it is at least documented.

      2) There is an undocumented SetArraySize() function. It can work for arbitrary array elements including clusters and accounts for the platform-specific alignment, but it is tricky since the datatype description for the array element is a LabVIEW type descriptor. Getting that right is pretty much as complicated as trying to calculate the array element size yourself, and as it is undocumented you risk that something might suddenly change. The declaration for that function is:

          TH_REENTRANT MgErr _FUNCC SetArraySize(int16 **tdp, int32 off, int32 dims, UHandle *p, int32 size);

      tdp is the 16-bit LabVIEW type descriptor for the array element data type. This is basically the same thing that you get from Flatten Variant, but you normally want to make sure that it does not contain any element labels, as they are not needed and only make the parsing slower. off is usually 0; it allows you to specify an offset into a more complex tdp array. The other parameters are exactly the same as for NumericArrayResize(). In fact, NumericArrayResize() is a thin wrapper around this function that uses predefined tdp's depending on its first parameter.
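      To make option 1 concrete, here is a minimal sketch for the simple case of a 1D array of doubles, where the documented typecodes already cover the element size (the wrapper name and the struct typedef are illustrative, not extcode.h APIs). For clusters you would instead resize by byte count (uB) and work out the padded element size yourself, as described above:

          #include "extcode.h"

          #include "lv_prolog.h"           /* apply LabVIEW's structure packing rules */
          typedef struct {
              int32   dimSize;
              float64 elt[1];              /* variable-size data portion of the LabVIEW array */
          } DblArray, **DblArrayHdl;
          #include "lv_epilog.h"           /* restore default packing */

          /* Illustrative helper: grow or shrink a LabVIEW 1D array of doubles that
             was passed into a DLL as an array handle (pointer to handle). */
          MgErr GrowDoubleArray(DblArrayHdl *arr, int32 newCount)
          {
              /* fD = 64-bit float element type, 1 = number of dimensions */
              MgErr err = NumericArrayResize(fD, 1, (UHandle *)arr, newCount);
              if (!err)
                  (**arr)->dimSize = newCount;   /* the resize call does not update dimSize */
              return err;
          }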
  24. Thank you @Rolf Kalbermatter. Your suggestion works as expected, and the code is much simpler than in the previous version. As I understand it, performing the opposite operation would not be easy since we can't resize the array. I wonder if there is any other function to allocate memory on the C side using LabVIEW, or if `NumericArrayResize` from extcode.h is the only option?