It probably selects all elements before it applies the filter. You can get more insight with the EXPLAIN statement: EXPLAIN (sqlite.org)
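For example, EXPLAIN QUERY PLAN prints a readable summary of how SQLite executes a statement. The table, column, and JSON path below are only placeholders, substitute your actual query:

-- placeholder table/column names; replace with your own JSON query
EXPLAIN QUERY PLAN
SELECT * FROM TimeSeries
WHERE json_extract(Data, '$.Channel0') > 0.5;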
Without the database, it's difficult to verify the behavior myself.
It may be more efficient to query channels from a dedicated table rather than from JSON, especially when the channel names are indexed; that gives SQLite more room to optimize the query.
Attached is an example database that stores each data point individually.
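For orientation, here is a minimal sketch of what such a schema could look like, inferred from the joins in the queries below; the attached dbtest.db.sql is the authoritative version and may differ in details:

-- sketch only, inferred from the queries below; see dbtest.db.sql for the actual schema
CREATE TABLE TimeSeries (
    Id   INTEGER PRIMARY KEY,
    Time INTEGER NOT NULL            -- timestamp of one sample row
);

CREATE TABLE Channel (
    Id   INTEGER PRIMARY KEY,
    Name TEXT NOT NULL UNIQUE        -- UNIQUE creates an index, so lookups by name are fast
);

CREATE TABLE ChannelData (
    Id        INTEGER PRIMARY KEY,
    ChannelId INTEGER NOT NULL REFERENCES Channel(Id),
    Value     REAL
);

CREATE TABLE TimeSeriesChannelData (
    TimeSeriesId  INTEGER NOT NULL REFERENCES TimeSeries(Id),
    ChannelDataId INTEGER NOT NULL REFERENCES ChannelData(Id),
    PRIMARY KEY (TimeSeriesId, ChannelDataId)
);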
Here is a query that will give you all data points for all time stamps:
SELECT TimeSeries.Time, Channel.Name, ChannelData.Value FROM TimeSeries
INNER JOIN TimeSeriesChannelData ON TimeSeries.Id == TimeSeriesChannelData.TimeSeriesId
INNER JOIN ChannelData ON TimeSeriesChannelData.ChannelDataId == ChannelData.Id
INNER JOIN Channel ON ChannelData.ChannelId == Channel.Id
You can also transpose the table to get the channels as columns. Unfortunately, SQLite has no built-in pivot function, so the channel names have to be hard-coded (not viable if the channel names are dynamic):
SELECT
TimeSeries.Time,
MAX(CASE WHEN Channel.Name = 'Channel 0' THEN ChannelData.Value END) AS 'Channel 0',
MAX(CASE WHEN Channel.Name = 'Channel 1' THEN ChannelData.Value END) AS 'Channel 1',
MAX(CASE WHEN Channel.Name = 'Channel 2' THEN ChannelData.Value END) AS 'Channel 2'
FROM TimeSeries
INNER JOIN TimeSeriesChannelData ON TimeSeries.Id == TimeSeriesChannelData.TimeSeriesId
INNER JOIN ChannelData ON TimeSeriesChannelData.ChannelDataId == ChannelData.Id
INNER JOIN Channel ON ChannelData.ChannelId == Channel.Id
GROUP BY TimeSeries.Time
If query performance is important, you could perform the downsampling in the producer instead of the consumer (downsample as new data arrives). In that case you trade storage size for query performance; pick whichever matters more to you.
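A minimal sketch of how that could look, assuming a pre-aggregated table next to the raw data (the table and column names here are made up and not part of the attached schema). A real producer would only fold in newly arrived rows instead of re-aggregating everything:

-- hypothetical pre-aggregated table, stored next to the raw data
CREATE TABLE IF NOT EXISTS TimeSeriesDownsampled (
    ChannelId INTEGER NOT NULL,
    Bucket    INTEGER NOT NULL,      -- e.g. Time / 60 for one-minute buckets, assuming Time is in seconds
    AvgValue  REAL,
    PRIMARY KEY (ChannelId, Bucket)
);

-- fold raw rows into buckets; INSERT OR REPLACE overwrites buckets that already exist
INSERT OR REPLACE INTO TimeSeriesDownsampled (ChannelId, Bucket, AvgValue)
SELECT ChannelData.ChannelId, TimeSeries.Time / 60, AVG(ChannelData.Value)
FROM TimeSeries
INNER JOIN TimeSeriesChannelData ON TimeSeries.Id = TimeSeriesChannelData.TimeSeriesId
INNER JOIN ChannelData ON TimeSeriesChannelData.ChannelDataId = ChannelData.Id
GROUP BY ChannelData.ChannelId, TimeSeries.Time / 60;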
Probably in a database 🤣
Seriously, though, these kinds of data are stored and processed in large computing facilities that have enough computing power to serve the data in a fraction of the time a normal computer would need. They probably also use database systems other than SQLite, some of which may be better suited to these kinds of queries. I have seen large time series applications built on MongoDB, for example.
As computing power is limited, it is all about "appearing as if it were very fast". As mentioned before, you can pre-process your data so that it is readily available. This, of course, requires additional storage space and only works if you know how the data will be used. In your case, you could pre-process the data into chunks of 2000 data points for display on the graph, store them next to the raw data, and have them readily available.
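With a pre-aggregated table like the hypothetical TimeSeriesDownsampled sketched above, the graph then only has to read back the prepared points, which is a small, index-ordered read instead of a scan over the raw data:

-- read back the prepared points for one channel; names and the channel id are placeholders
SELECT Bucket * 60 AS Time, AvgValue
FROM TimeSeriesDownsampled
WHERE ChannelId = 1
ORDER BY Bucket;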
There may be ways to optimize your implementation, but there is no magic bullet that will make your computer compute large datasets in a split second on demand (unless you have the necessary computing power, in which case the magic bullet is called "money").
dbtest.db.sql