Everything posted by ShaunR

  1. Why would you want a dataflow construct when the language supports it implicitly, then? Unless it is to fix the breaking of that dataflow caused by the LVPOOP ideology. However, if you are trying to make a distinction between OOP and OOD, then I am in agreement, since OOP is not required for the latter.
  2. You haven't noticed? It probably has something to do with being one of the 5%.
  3. Have you built a VM with the NI-RT Linux? I had a go but they are using an old-ass version of Yocto.
  4. It doesn't matter. If it's for communication, then prepare the listener for your definition (whatever that may be), then use it and move on to the important things about the code - just be consistent. I get fed up with being asked to ponder philosophical significance in OOP.
  5. Be aware when benchmarking globals that their access times are heavily dependent on how many instances there are and whether you are reading and/or writing. They deteriorate rapidly as contention for the resource and the number of copies of the data increase.
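     For illustration only - not LabVIEW, just a minimal Python sketch of the general contention effect, where a lock-protected shared value stands in for a global (the thread counts, operation counts and the analogy itself are assumptions, not a claim about how LabVIEW globals are implemented):

     ```python
     # Rough analogy: every access to the shared value serialises on one lock,
     # so per-access time grows as more readers/writers contend for it.
     import threading
     import time

     shared = {"value": 0}
     lock = threading.Lock()

     def hammer(n_ops, results, idx):
         start = time.perf_counter()
         for _ in range(n_ops):
             with lock:                  # the point of contention
                 shared["value"] += 1
         results[idx] = time.perf_counter() - start

     for n_threads in (1, 2, 4, 8):
         results = [0.0] * n_threads
         threads = [threading.Thread(target=hammer, args=(100_000, results, i))
                    for i in range(n_threads)]
         for t in threads:
             t.start()
         for t in threads:
             t.join()
         print(f"{n_threads} contenders: {max(results):.3f} s for 100k accesses each")
     ```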
  6. Place the polymorphic VI on the diagram and click the label at the bottom. A menu listing all of the functions will appear, from which you can choose one. Copy and paste the VI (or drag with the mouse while holding CTRL to create a copy) and choose another function in the same way to create another instance.
  7. You don't need to defend yourself on some arbitrary forum. The relationship (or lack thereof) between yourself and NI is between you and NI alone, and it is up to NI whether they want to challenge or defend their IP. Questioning a company's integrity in public is highly unprofessional and you do not need to respond.
  8. I have several use cases in mind along the usual lines of software distribution and distributed databases/file systems. A few others too that are closer to what you are describing, but they are more a case of "it could be used, but there are probably better ways" - I'd know more later. Service discovery is a means to an end for DHTs if you consider supplying a key-value pair a service. The difference between Kademlia and Chord is basically how they search for and contact providers of specific key-value data, with the expectation that someone will supply it but without caring who. If only one of each service is expected in a system then I'm not sure what would be gained, and there would certainly be much faster ways, but if you wanted to spread configuration data amongst all services for fail-over (effectively a distributed database) then maybe. Off the top of my head, you could probably use the routing table from a DHT in some way, but it's a big "depends".
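     For anyone wanting to prototype the routing side in a text language first, the core of Kademlia is just the XOR metric over node/key IDs. A minimal Python sketch (the 160-bit SHA-1 IDs match classic Kademlia; the brute-force "closest nodes" search is a simplification of the real k-bucket routing table):

     ```python
     import hashlib

     def node_id(name: str) -> int:
         # Classic Kademlia uses 160-bit IDs; SHA-1 of some identifier is typical.
         return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

     def xor_distance(a: int, b: int) -> int:
         # Kademlia's metric: distance(a, b) = a XOR b, compared as an integer.
         # (Chord instead uses clockwise distance around an identifier ring.)
         return a ^ b

     def closest_nodes(key: int, nodes: list[int], k: int = 3) -> list[int]:
         # A real node keeps k-buckets; this brute-force sort just shows which
         # peers a lookup for `key` converges towards.
         return sorted(nodes, key=lambda n: xor_distance(key, n))[:k]

     peers = [node_id(f"peer-{i}") for i in range(20)]
     target = node_id("some-key")
     for n in closest_nodes(target, peers):
         print(f"{n:040x}")
     ```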
  9. Has anyone worked with, implemented or investigated DHTs in LabVIEW? I'm particularly interested in Kademlia/S/Kademlia and Chord, so I would appreciate any input specifically about those, but any other DHTs that people have played with would be great for discussion.
  10. I'm not sure what you mean by "entirely offline". Javascript libraries can be loaded in any browser off of the local file system, so online servers are just the delivery mechanism for the Javascript code. Dynamically updating the variables in that Javascript has a number of options, from Websockets and WebRTC to the local NI Webserver and so on. If push came to shove, you could use the .NET Internet Explorer browser control on the front panel and pretend it is just a graph control.
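     As a sketch of the "dynamically updating" part: a minimal Python WebSocket server that a locally loaded page could connect to and plot from (uses the third-party websockets package, v10+; the port, the JSON message shape and the choice of Python rather than the NI web server are all assumptions):

     ```python
     # pip install websockets
     import asyncio
     import json
     import math
     import time

     import websockets

     async def push_data(websocket):
         # Stream a fake measurement once a second; a page loaded off the local
         # file system (flot, plotly, ...) opens ws://localhost:8765 and plots it.
         t0 = time.time()
         while True:
             point = {"t": time.time() - t0, "y": math.sin(time.time())}
             await websocket.send(json.dumps(point))
             await asyncio.sleep(1.0)

     async def main():
         async with websockets.serve(push_data, "localhost", 8765):
             await asyncio.Future()  # run until cancelled

     if __name__ == "__main__":
         asyncio.run(main())
     ```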
  11. I investigated some, but not plotly. I settled on flot for the Websocket API for LabVIEW and it is the API used for the Dashboard example. Raphael was a close second, but it is more generic vector-graphics drawing as opposed to just graphing/plotting, so it is harder to use. I have mentioned before that I now use browsers exclusively for LabVIEW UIs, so if the main attraction is the data-sharing aspect of the plotly online service, or if you are constrained by it appearing on your FP, then it's probably not what you are looking for. If it is for creating superior graphs then it is perfect.
  12. You are absolutely correct, but a non-reentrant VI is the equivalent of a blocking socket as opposed to an asynchronous one, and we can argue all day about the pros and cons of that. (It usually ends up as "it depends".)
  13. Seems to be an error in the LabVIEW read function rather than an OBEX protocol error (which would contain "OBEX Error"). I would expect this sort of error if the server closed the session before the read was attempted. What does the server log say about the connection when you see this error?
  14. Seeing as you have not specified what the error is, I had to mind-meld with your Raspberry Pi 3, and it informs me it is just trolling you.
  15. While true, it is fairly easy to mitigate by using property nodes to handle groups of controls. The main reason I refuse to use tab controls is the difficulty with control scaling and positioning - especially with splitters on the FP and the tab control itself. Otherwise I have no particular problem with them.
  16. Reading, yes. Writing, no. Incidentally, it's the same with INI files etc. You don't need to keep a list in memory or have lookup code for it - just read and write directly to the file. Not sure about other platforms, though.
  17. SQLite is single writer, multiple readers, as it locks at the database level. If drjdpowell had followed through with the enumeration it would (or should) have been in there. The high-level API in the SQLite API for LabVIEW insulates you from the Error 5 hell and the "busy" handling (when the DB is locked) that you encounter when trying simultaneous parallel writes with low-level APIs. So simultaneous parallel writes are not appropriate usage... sort of. You can mitigate some of these operational aspects, as you are attempting to do, but in reality you are now just starting the process of proving my rule of thumb:
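     For reference, this is roughly what the "busy" handling looks like if you do it yourself at a low level - a minimal Python sqlite3 sketch (the table name and retry policy are assumptions; the point is just that a second simultaneous writer gets "database is locked" unless something waits or retries):

     ```python
     import sqlite3
     import time

     def write_with_retry(db_path, sql, params, retries=5, backoff=0.05):
         # SQLite allows many readers but only one writer at a time; a second
         # writer hitting a locked database raises "database is locked".
         conn = sqlite3.connect(db_path, timeout=0)   # timeout=0: fail immediately
         try:
             for attempt in range(retries):
                 try:
                     with conn:                       # implicit BEGIN ... COMMIT
                         conn.execute(sql, params)
                     return
                 except sqlite3.OperationalError as e:
                     if "locked" not in str(e) or attempt == retries - 1:
                         raise
                     time.sleep(backoff * (attempt + 1))
         finally:
             conn.close()

     # Or let SQLite do the waiting instead of retrying by hand:
     #   conn = sqlite3.connect(db_path, timeout=5)     # seconds
     #   conn.execute("PRAGMA busy_timeout = 5000")     # milliseconds

     setup = sqlite3.connect("log.db")
     setup.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")
     setup.commit()
     setup.close()
     write_with_retry("log.db", "INSERT INTO log (msg) VALUES (?)", ("hello",))
     ```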
  18. Yes. The actual performance is dependent on a number of factors, two of which are the version of SQLite used and how the binaries are compiled. The figure I quoted is a nominal value for SQLite in general rather than for the binaries I used, since the intended message is that TDMS is orders of magnitude more performant for writing in the right circumstances. There is a benchmark included as an example with which you can ascertain the actual performance on your system with arbitrary columns and transaction counts. You can download it from that page to try it, as it is open source and free for non-commercial use. It is a single-line SQL query, so the complexity for decimation is trivial - unlike handling the data in LabVIEW, with or without TDMS. I got bored after 1 million data points, but if you do more (that VI is one of the examples), please post the results. Sweet!
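     To give a flavour of what a single-line decimation query can look like - a minimal Python/sqlite3 sketch (the table and column names and the modulo-on-rowid approach are assumptions, not necessarily what the shipped example does):

     ```python
     import sqlite3

     def decimated(db_path, points=500):
         # Let the database do the skipping: return roughly `points` rows by
         # keeping every Nth row, however many rows the table holds.
         conn = sqlite3.connect(db_path)
         try:
             total = conn.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
             step = max(1, total // points)
             return conn.execute(
                 "SELECT t, value FROM samples WHERE rowid % ? = 0 ORDER BY rowid",
                 (step,),
             ).fetchall()
         finally:
             conn.close()
     ```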
  19. Are there benchmarking graphs or data you can post? I did a lot with SQLite - some of which you can see under the "Performance" heading. It might be nice to have some graphical comparisons. SQLite should blow away TDMS for queries, but TDMS should blow away SQLite for writes. The data-logging example in the API (which you can play with) demonstrates decimation and takes about 150 ms to retrieve the 500 decimated data points when there are 1 million [DBL] data points in total (IIRC).
  20. Hmm. That's not really saying anything; it's a variant on "use the right tool for the job". How about enumerating these alleged design cases for the OP and comparing the practicalities? I'll start you off: TDMS is designed for high-speed disk writing and can write to disk at more than 400 MB/s with the right disks. SQLite can only manage about 0.5-1 MB/s.
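     If anyone wants to sanity-check the SQLite side of that on their own disk, something along these lines will do it - a minimal Python sketch (row shape, batch size and journal settings are all assumptions, and each of them swings the result considerably):

     ```python
     import os
     import sqlite3
     import time

     DB = "bench.db"
     ROWS = 100_000
     ROW = (1.234567890123,) * 8        # 8 DBL columns per row

     if os.path.exists(DB):
         os.remove(DB)
     conn = sqlite3.connect(DB)
     conn.execute("CREATE TABLE data (c0 REAL, c1 REAL, c2 REAL, c3 REAL,"
                  " c4 REAL, c5 REAL, c6 REAL, c7 REAL)")

     start = time.perf_counter()
     with conn:                          # one transaction for the whole batch
         conn.executemany("INSERT INTO data VALUES (?,?,?,?,?,?,?,?)", [ROW] * ROWS)
     elapsed = time.perf_counter() - start
     conn.close()

     mb = os.path.getsize(DB) / 1e6
     print(f"{ROWS} rows, {mb:.1f} MB file in {elapsed:.2f} s -> {mb / elapsed:.1f} MB/s")
     ```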
  21. But memory [re]allocation is... Rule of thumb: if you want to search the data, use SQLite. If you want high-speed bulk data written to disk, use TDMS. If you want high-speed data that is also searchable, use both, with just-in-time post-processing.