Everything posted by ShaunR

  1. Yes. I have looked at the RT Images directory now and it seems fairly straightforward. There seem to be a couple of ways to utilise the deployment framework, but the only thing I'm not sure of is the origin of the GUID strings. I haven't looked at the zip package as yet. I get around the elevation by using my own VI that is invoked as the pre-install VI of VI Package Manager. If it fails, it tells the user to run the LabVIEW IDE using "Run As Administrator". Not perfect, but users seem to have no issue with this process, and it works on other platforms (like Linux and Mac) too - of course, changing to the appropriate mechanism (su, kdesu etc.). Since I have all the tools, I'm also thinking of SFTP from a LabVIEW menu instead of the NI deployment. MAX and I don't really get along, and Silverlight brings me out in hives. It would be great for the Linux platforms and would avoid most, if not all, of the privilege problems, since I would be able to control the interaction. I'm just mulling over whether I want that knowledge and can be bothered to see where it goes.
  2. For completeness, here is how you could have used the flatten function from your first try. python example.zip
  3. You can have your cake and eat it if you wrap them in a VI Macro.
  4. I'm using 2009; I'll check the others later. ...a little later... OK for 2014 too (it must be operator error on your machine).
  5. Well, the TCP Write can accept a U8 array, so it's a bit weird that the read cannot output to a U8 indicator; a simple adapt-to-type on the read primitive would make both of us happy bunnies. Ahem. "\r\n" (CRLF), rather than EOF (which means nothing as a string in LabVIEW), is the usual ASCII terminator; then you can "readlines" from the TCP stream if you want to go this route (see the sketch below). You will find there is a CRLF mode on the LabVIEW TCP Read primitive for this too. Watch out for the localised decimal point on your number-to-string primitive, which may surprise you on other people's machines (commas vs dots), and be aware that you have fixed the number of decimal places and width (8), so you have lost resolution. You will also find that it's not very useful as a generic approach, since binary data cannot be transmitted, so most people revert to the length byte sooner or later.
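
     A minimal sketch of the CRLF "readlines" approach on the Python side, assuming an already-connected socket object named sock (the name and buffer size are illustrative):

        # Yield complete lines terminated by \r\n, mirroring the CRLF mode
        # on the LabVIEW TCP Read primitive.
        def read_crlf_lines(sock):
            buf = b""
            while True:
                chunk = sock.recv(4096)
                if not chunk:               # peer closed the connection
                    break
                buf += chunk
                while b"\r\n" in buf:
                    line, buf = buf.split(b"\r\n", 1)
                    yield line.decode("ascii")

        # Usage: numbers arrive as ASCII text, one value per line.
        # Note float() always expects a dot, never a locale comma.
        # for line in read_crlf_lines(sock):
        #     value = float(line)
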
  6. I don't? (write shared libraries). This issue was solved a long time ago and there are several methods to determine and correct the endianness of the platform at run-time so as to present a uniform endianness to the application (a sketch of one is below). I have one in my standard utils library, so I don't see this problem - ever. Don't get me started on VISA...lol. I'm still angling for VISA refnums to wire directly to event structures for event-driven comms. Strings are variants to me, so I see no useful progress in making the TCP read strictly typed - it would mean I have another type straitjacket to get around, and you would find everyone moaning that they want a variant output anyway.
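
     For illustration, one run-time method in Python: detect the host byte order and always pack in network (big-endian) order, so the wire format is uniform on every platform. This is a generic sketch, not the utility library mentioned above:

        import struct
        import sys

        HOST_IS_LITTLE = (sys.byteorder == "little")   # run-time detection

        def to_network_u32(value):
            # '!' always packs big-endian (network order), regardless of host.
            return struct.pack("!I", value)

        def from_network_u32(data):
            return struct.unpack("!I", data)[0]
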
  7. While I accept your point of order, how LabVIEW represents numerics under the hood is moot. Whenever you interact with numerics in LabVIEW as bytes or bits, it is always big endian (only one or two primitives allow different representations) - whether that be writing to a file, flattening/casting, split/combine number, or TCP/IP and serial et al. As a pure programmer you are correct; as an applied programmer, I don't care.
  8. The first example you posted... There are size headers already: the flatten function adds a size for arrays and strings unless the "prepend size" flag is FALSE. The thing that may be confusing is that the server reverses the string, and therefore the byte array, before sending. Not sure why they would do that, but if the intent was to turn it into a little-endian array of bytes then it is a bug. I don't see you catering for that in the python script, and since the extraction of the msg length (4 bytes) is correct, python is expecting big endian when it unpacks bytes using struct.unpack, but the numeric bytes are reversed.

     The second example has reversed the bytes for the length and appended (rather than prepended) it to the message. I think this is why you have said 100 is enough, since it's pretty unusable if you have to receive an entire message of unknown length in order to know its length.

     If you go back to the first example, remove the reverse string, set the "prepend size" flag to FALSE, and then sort out the endianness, you will be good to go. The flatten primitive even has an option for big endian (network byte order) and little endian, so you can match the endianness to python before you transmit (don't forget to put a note about that in your script and the LabVIEW code for six months' time, when you will have forgotten all this). If you need to change the endianness of the length bytes and are not going to cater for it in the python script, you will have to use the "Split Number" and "Join Number" functions. All LabVIEW numerics are big endian. A sketch of the matching framing on the python side follows.
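
     A sketch of length-prefixed framing on the Python side that matches a LabVIEW flatten with network (big-endian) byte order and a prepended 4-byte length; the helper names are made up for illustration:

        import struct

        def send_message(sock, payload):
            # 4-byte big-endian length header, then the payload itself.
            sock.sendall(struct.pack(">I", len(payload)) + payload)

        def recv_exact(sock, n):
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("socket closed mid-message")
                buf += chunk
            return buf

        def recv_message(sock):
            (length,) = struct.unpack(">I", recv_exact(sock, 4))
            return recv_exact(sock, length)

        # A flattened array of DBLs is consecutive big-endian doubles:
        # values = struct.unpack(">%dd" % (len(data) // 8), data)
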
  9. [Re: Turn Key DAS] Moves like that are almost never due to technical capabilities. They are either political, or the sales engineer has worked hard for a long time and negotiated some enormous discounts and concessions to break the NI lock-in. Have there been any changes to the decision-making management recently? Say, an ex-Siemens employee?
  10. Maybe you should post your code so we can see the issue. Ping Example.vi
  11. Many people use Rolf's oglib_pipe, but the real solution is to come into the 21st century and stop writing CLIs.
  12. Not really a LabVIEW thing. You can manage Windows devices using the SetupDi API, and you would call it using the CLFN (a rough sketch of the calls is below). If that didn't do it for you, then you'd be back down in the IOCTLs, where there be monsters.
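
     A rough, illustrative sketch of those SetupDi calls from Python via ctypes (constants and struct layout transcribed from setupapi.h; treat it as a starting point, not production code):

        import ctypes
        from ctypes import wintypes

        setupapi = ctypes.windll.setupapi
        setupapi.SetupDiGetClassDevsW.restype = ctypes.c_void_p

        DIGCF_PRESENT    = 0x00000002
        DIGCF_ALLCLASSES = 0x00000004
        SPDRP_DEVICEDESC = 0x00000000

        class SP_DEVINFO_DATA(ctypes.Structure):
            _fields_ = [("cbSize", wintypes.DWORD),
                        ("ClassGuid", ctypes.c_byte * 16),
                        ("DevInst", wintypes.DWORD),
                        ("Reserved", ctypes.c_void_p)]   # ULONG_PTR

        # Enumerate every device currently present and print its description.
        hdev = ctypes.c_void_p(setupapi.SetupDiGetClassDevsW(
            None, None, None, DIGCF_PRESENT | DIGCF_ALLCLASSES))
        info = SP_DEVINFO_DATA()
        info.cbSize = ctypes.sizeof(SP_DEVINFO_DATA)
        index = 0
        while setupapi.SetupDiEnumDeviceInfo(hdev, index, ctypes.byref(info)):
            desc = ctypes.create_unicode_buffer(512)
            if setupapi.SetupDiGetDeviceRegistryPropertyW(
                    hdev, ctypes.byref(info), SPDRP_DEVICEDESC,
                    None, desc, ctypes.sizeof(desc), None):
                print(desc.value)
            index += 1
        setupapi.SetupDiDestroyDeviceInfoList(hdev)
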
  13. Pharlap is a walk in the park; VxWorks was the one that made me age 100 years. I actually have some source with the relevant changes, but never had a device.
  14. That's nothing to sniff at. At some point you just have to say "this is the wrong way to approach this problem". JSON isn't a high-performance NoSQL database - it's just a text format, and one designed for a non-threaded, interpreted scripting language (so performance was never on the agenda).
  15. 300 MB/sec? If you want bigger JSON streams, the Bitcoin order books are usually a few MB.
  16. I don't use any of them for this sort of thing. They introduced the JSON extension as a build option in SQLite, so it just goes straight in (raw) to an SQLite database column, and you can query the entries with SQL just as if it were a table (see the sketch below). It's a far superior option (IMO) to anything in LabVIEW for retrieval, including the native one. I did write a quick JSON exporter in my API to create JSON from a query as the corollary (along the lines of the existing export to CSV), but since no-one is investing in the development anymore, I'm pretty "meh" about adding new features, even though I have a truck-load of prototypes. (And yes, I figuratively wanted to kiss Ton when he wrote the pretty print.)
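
     A minimal sketch of that pattern using Python's sqlite3 module (assuming an SQLite build with the JSON1 extension, which is the default in modern releases; the table and key names are made up):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE logs (id INTEGER PRIMARY KEY, doc TEXT)")
        db.execute("INSERT INTO logs (doc) VALUES (?)",
                   ('{"sensor": "T1", "reading": 23.5}',))

        # json_extract queries the raw JSON as if it were a table column.
        for row in db.execute(
                "SELECT json_extract(doc, '$.sensor'),"
                "       json_extract(doc, '$.reading')"
                "  FROM logs WHERE json_extract(doc, '$.reading') > 20"):
            print(row)   # ('T1', 23.5)
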
  17. I was initially manipulating the string, but then you demonstrated the recursive approach with objects for encoding, which was more elegant and removed all the dodgy string logic to handle the hierarchy. Once I found that classes just didn't cut it for performance (as per usual), I went back and solved the same problem with queues. The fundamental difference in my initial approach was that the retrieval type was chosen by the polymorphic instance that the developer chose (it ignored the implicit type in the JSON data). That was fast, but getting a key/value table was ugly. Since all key/value pairs were strings internally, the objects made it easier to get the key/value pairs into a lookup table. Pushing and popping queues was much faster and more efficient at that, though, and didn't require large amounts of contiguous memory (the sketch below illustrates the general idea).
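
     A toy illustration of the technique - tracking nesting with an explicit stack instead of recursion or objects (this is not the library's actual parser):

        def flatten_keys(obj):
            # Walk parsed JSON iteratively, yielding (path, value) pairs
            # for a key/value lookup table.
            stack = [("", obj)]          # explicit stack replaces the call stack
            while stack:
                path, node = stack.pop()
                if isinstance(node, dict):
                    for k, v in node.items():
                        stack.append((path + "." + k if path else k, v))
                elif isinstance(node, list):
                    for i, v in enumerate(node):
                        stack.append(("%s[%d]" % (path, i), v))
                else:
                    yield path, node     # leaf value

        # list(flatten_keys({"a": {"b": 1}, "c": [2, 3]}))
        # -> [('c[1]', 3), ('c[0]', 2), ('a.b', 1)]
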
  18. Yes. I now use the SQLite JSON capabilities for in-application uses, but I also have my own parser for Websockets and comms. The other JSON library was just too slow for streaming Websockets, and the NI primitive is as much use as a chocolate fireguard because it crashes out if anything isn't quite right (which I've raged about before). If you want to see this sort of use case, take a look at blockchain.info for the real-time transactions (a sketch of subscribing to that feed is below). I went back to my original ones that I showed in the original thread and developed those further by having a format case for each and every type, and used queues for the nesting (keeping the polymorphic reads the same as the original). It is acceptably slower than the native one and orders of magnitude faster than the other library on large data (much the same for small snippets), although it isn't as good with all the different encodings, which it just hand-waves to a string.
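
     A hedged sketch of subscribing to that feed with the third-party websockets package; the endpoint and subscription message follow blockchain.info's published Websocket API, so verify them before relying on this:

        import asyncio
        import json
        import websockets

        async def stream_transactions():
            async with websockets.connect("wss://ws.blockchain.info/inv") as ws:
                await ws.send(json.dumps({"op": "unconfirmed_sub"}))
                async for message in ws:         # one JSON document per frame
                    tx = json.loads(message)
                    print(tx.get("op"), len(message), "bytes")

        # asyncio.run(stream_transactions())
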
  19. Quite a few NAS boxes do this automatically nowadays with smart folders. As I subscribe to the "if it ain't broke, don't fix it" school of laziness, I would probably tell IT to get one, or make them offer me the service and leave my software alone.
  20. Note: that is the BSD 2-Clause licence. There is also the BSD 3-Clause licence, which adds a clause disavowing the use of the provider's name for promotion and/or endorsement of derivative works.
  21. Lots of caveats and manual optimisation (loop unrolling). Not all MS functions are suitable, and whether it can meet the RT deadlines you require is suspect. Risk analysis would probably yield "don't touch this with a barge pole". If push came to shove, then maybe some things might be possible with the node, but the heavy lifting would probably need to be offloaded to meet spec. You would probably find that the program written for the node can't be compartmentalised into defined chunks for offloading without a complete refactoring of the script code.