Posts: 706
Days Won: 79
Everything posted by LogMAN
-
Thanks for keeping up the fight! I don't even want to imagine your inbox on a Monday...
-
There is an option in your profile settings to disable individual blocks on the profile page. Perhaps it was disabled on the accounts you checked. Yeah, that's a good idea. It appears that some visitors/guests find their way through search engines, but if anyone is asking questions, there is just no way to see them through all the spam. I'd be happy to help block spammers, but the way it is now just isn't sustainable and makes the site practically unusable. I hope Michael intervenes soon while it is still manageable.
-
While that may be true, the current spam posts are clearly not using any advanced technology like that. At the very least, enabling CAPTCHAs will force them to invest much more processing power to solve them, which either reduces the number of posts they can send or discourages them from pursuing it any further. In conjunction with IP blocking, this can be very effective against these kinds of script kiddies. Second that!
-
Since user creation is disabled anyway, this should be enough to prevent spam posts from being displayed to regular users. It probably will flood the moderator inbox, though. Perhaps CAPTCHA could be a viable option? Especially for newly created posts like the ones currently being spammed. I'm not sure if there are options to require CAPTCHA for new users and relax the requirement for users of higher ranks, but that could be worth looking into.
-
Thanks again for taking care of the mess! Is it easier if we report every spam account or does it only put more spam in your inbox?
-
It appears that social sign-in is still possible and allows for account creation even though registration is disabled. For example, Twitter - Social Sign In - Invision Community. Maybe turning those off will put us in a walled garden 🤔
-
Why doesn't TCP Listen listen to my IP address
LogMAN replied to govindsankarmr's topic in LabVIEW General
Did you unblock the port in your firewall settings to allow inbound connections from the client?
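If you want to rule out the firewall, here is a quick reachability test you can run from the client machine, sketched in Python (the address and port are placeholders for your setup):

import socket

# Try to open a TCP connection to the listener (hypothetical address/port).
# If this raises, the connection is blocked before it ever reaches TCP Listen.
try:
    with socket.create_connection(("192.168.1.10", 6340), timeout=2.0):
        print("Port is reachable")
except OSError as exc:
    print("Connection failed:", exc)

-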
This was also mentioned in their blog post when they discontinued LabVIEW NXG: Our Commitment to LabVIEW as we Expand our Software Portfolio - NI Community
-
I can CREATE a snippet, I just can't USE a snippet
LogMAN replied to Phillip Brooks's topic in LAVA Lounge
I actually never bothered to click on that and simply assumed that it does the same as selecting the image 😅 Yes, this has been known for a long time. You wouldn't believe who reported it first 😄
-
Thanks for cleaning up the mess. Let us know if we can help. In the meantime, we'll provide moral support in the form of memes.
-
They probably went to bed for a few hours; now they're back. Is CAPTCHA an option for posting new messages?
-
I can CREATE a snippet, I just can't USE a snippet
LogMAN replied to Phillip Brooks's topic in LAVA Lounge
I believe you got it backwards. Run LabVIEW as admin and try to import a snippet. It's a no-no... Run LabVIEW as a normal user and snippets work just fine. That's a neat little trick, thanks for sharing!
-
I can CREATE a snippet, I just can't USE a snippet
LogMAN replied to Phillip Brooks's topic in LAVA Lounge
-
@ensegre is right. Use Show Buffer Allocations (Tools » Profile » Show Buffer Allocations) to visualize buffer allocations. It shows black dots wherever a buffer is allocated, i.e. where a copy may occur.
-
Yes, that works. It creates a copy of the unbundled value (which requires more memory). The IPE Structure avoids this copy by overwriting the original value. This is also explained in the docs: Unbundle / Bundle Elements - NI. Please note that my example is simple enough that the compiler can probably optimize it on its own. Here is another example that the compiler cannot optimize on its own because of the Select:
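As a loose text-based analogy of copy vs. in-place (Python with NumPy, not LabVIEW semantics):

import numpy as np

data = np.zeros(1_000_000)  # a large buffer

# Copy: allocates a second buffer and rebinds the name, roughly what
# Unbundle -> modify -> Bundle does when the compiler cannot optimize it.
copied = data + 1.0

# In-place: overwrites the existing buffer without a second allocation,
# roughly what the IPE Structure lets the compiler guarantee.
data += 1.0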
-
The In-Place Element (IPE) Structure can be used for memory optimization. It is most useful for large datasets, for example to modify the value of a cluster in-place (hence the name). Here is a simple example: This is functionally equivalent to using `Unbundle By Name` followed by `Bundle By Name`, but it allows the compiler to avoid a memory copy for the OK value and increment it in-place.

Note that there are different kinds of border nodes that you can use on the IPE Structure: In Place Element Structure - NI. In your example, the Data Value Reference Read / Write Element border nodes are used: Data Value Reference Read / Write Element - NI. They allow you to access the value of a DVR in-place so that you can read, modify, and write a new value to the DVR. While the value is being used in one IPE Structure, no other IPE Structure can access it (all other IPE Structures that attempt to access the DVR at the same time are blocked). Since a new DVR is created for each instance of `Modbus master`, this ensures that multiple `Modbus master` instances can execute in parallel (non-blocking), but for each individual `Modbus master`, only one read or write operation can happen at a time (blocking).

Yes and no. Yes, because it is functionally equivalent to a FGV (it prevents race conditions when reading/writing the value). No, because it is not necessarily global (there may be multiple instances of `Modbus master`, each with its own copy of `mutex`). You can think of it as a FGV that is created for each instance of `Modbus master`. Note, however, that the value of the DVR is never used in your example. It only serves as a synchronization mechanism, so in this particular case the data type of the DVR doesn't actually matter.

If you have large datasets (for example, an array that takes several MB or GB of memory), they are very good candidates for a DVR so that memory copies can be avoided while you work on them. Especially in 32-bit, where memory is relatively limited. Since DVRs are by-reference, you don't need to connect `Modbus master out`. It will work even if you split the wire (only the DVR is copied, not the value inside the DVR).

In this particular case, yes - if you had a different Semaphore for each instance of `Modbus master`.
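As a rough textual analogy (Python, not LabVIEW; all names are hypothetical), a DVR used this way behaves like a value guarded by a lock: the reference is cheap to share, and the value is only touched while the lock is held:

import threading

class DataValueReference:
    # Analogy for a DVR: the object reference is shared freely;
    # the value inside is only accessed under the lock.
    def __init__(self, value):
        self._lock = threading.Lock()
        self._value = value

    def modify(self, update):
        # Like an IPE Structure with DVR Read/Write border nodes:
        # read, update, and write back while other accessors are blocked.
        with self._lock:
            self._value = update(self._value)

mutex = DataValueReference(0)          # one per `Modbus master` instance
mutex.modify(lambda value: value + 1)  # in-place update under the lock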
-
Welcome to LavaG. This is a queue: What Is a Queue in LabVIEW? - NI. You probably tried to delete the control inside the queue indicator. That doesn't work because a queue must always have a subtype. As the error message suggests, simply drag a new type onto the queue indicator and it will replace the existing one. Alternatively, use the 'Obtain Queue' function on your block diagram to create a new indicator based on the configured input type.
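In case the concept itself is new, here is the behavior of a queue in a quick Python sketch (a loose analogy, not LabVIEW):

import queue

q = queue.Queue()   # like Obtain Queue; the element type is what LabVIEW calls the subtype
q.put("first")      # Enqueue Element
q.put("second")
print(q.get())      # Dequeue Element -> "first" (FIFO order)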
-
It probably selects all elements before it applies the filter. You can get more insight with the EXPLAIN query: EXPLAIN (sqlite.org). Without the database it's difficult to verify the behavior myself.

It may be more efficient to query channels from a table than from JSON, especially when the channel names are indexed. That way, SQLite can optimize queries more effectively. Find attached an example of a database that stores each data point individually. Here is a query that will give you all data points for all time stamps:

SELECT TimeSeries.Time, Channel.Name, ChannelData.Value
FROM TimeSeries
INNER JOIN TimeSeriesChannelData ON TimeSeries.Id == TimeSeriesChannelData.TimeSeriesId
INNER JOIN ChannelData ON TimeSeriesChannelData.ChannelDataId == ChannelData.Id
INNER JOIN Channel ON ChannelData.ChannelId == Channel.Id

You can also transpose the table to get channels as columns. Unfortunately, SQLite does not have a built-in function for this, so the names are hard-coded (not viable if channel names are dynamic):

SELECT
    TimeSeries.Time,
    MAX(CASE WHEN Channel.Name = 'Channel 0' THEN ChannelData.Value END) AS 'Channel 0',
    MAX(CASE WHEN Channel.Name = 'Channel 1' THEN ChannelData.Value END) AS 'Channel 1',
    MAX(CASE WHEN Channel.Name = 'Channel 2' THEN ChannelData.Value END) AS 'Channel 2'
FROM TimeSeries
INNER JOIN TimeSeriesChannelData ON TimeSeries.Id == TimeSeriesChannelData.TimeSeriesId
INNER JOIN ChannelData ON TimeSeriesChannelData.ChannelDataId == ChannelData.Id
INNER JOIN Channel ON ChannelData.ChannelId == Channel.Id
GROUP BY TimeSeries.Time

If query performance is important, you could perform the downsampling in the producer instead of the consumer (downsample as new data arrives). In this case you trade storage size for query performance, whichever is more important to you.

Probably in a database 🤣 Seriously, though, these kinds of data are stored and processed in large computing facilities that have enough computing power to serve data in a fraction of the time a normal computer needs. They probably also use database systems other than SQLite, some of which may be better suited to these kinds of queries. I have seen applications for large time series data on MongoDB, for example.

As computing power is limited, it is all about "appearing as if it was very fast". As mentioned before, you can pre-process your data so that it is readily available. This, of course, requires additional storage space and only works if you know how the data will be used. In your case, you could pre-process the data into chunks of 2000 data points for display on the graph. Store them next to the raw data and have them readily available. There may be ways to optimize your implementation, but there is no magic bullet that will make your computer compute large datasets in a split second on demand (unless you have the necessary computing power, in which case the magic bullet is called "money").

dbtest.db.sql
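To see whether the indexes are actually used, here is a quick check in Python (a sketch; it assumes the attached schema has been imported into a file called dbtest.db):

import sqlite3

QUERY = """
SELECT TimeSeries.Time, Channel.Name, ChannelData.Value
FROM TimeSeries
INNER JOIN TimeSeriesChannelData ON TimeSeries.Id == TimeSeriesChannelData.TimeSeriesId
INNER JOIN ChannelData ON TimeSeriesChannelData.ChannelDataId == ChannelData.Id
INNER JOIN Channel ON ChannelData.ChannelId == Channel.Id
"""

with sqlite3.connect("dbtest.db") as connection:
    # EXPLAIN QUERY PLAN shows whether SQLite walks an index
    # ("SEARCH ... USING INDEX") or scans the whole table ("SCAN").
    for row in connection.execute("EXPLAIN QUERY PLAN " + QUERY):
        print(row)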