Posts posted by smithd


  1. 1 hour ago, Rolf Kalbermatter said:

    They are mostly meant for FPGA, where variable sized elements have a very high overhead to be implemented.

    They could be valuable on RT if enforced. Today you have to do this yourself with byte arrays and Replace Array Subset.


  2. A better way of looking at it might be that a variant is a specific data type.

    So if we assume C++ has some map<keyType, valueType> under the hood, the new LabVIEW map type exposes this directly and lets keyType and valueType be any LabVIEW type.

    The variant attribute system was specifically map<string, variant>. The API added some sugar on top so that if you specified a type, it performed variantToData(map.read(myStringKey), myDataType)...but it was still a map<string, variant>. If the content of your variant is a big array, this can cause performance issues because of that variantToData call. Compare that to 2019, where your map type might be map<string, lvArrayHandle>, which doesn't need any of the weird memory allocation that happens when you call variantToData(myArrayInsideOfAVariant, dblArrayType).

    I don't know if this is precisely accurate, but that's my general understanding.

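    A rough C++ sketch of the distinction described above (an analogy only, not NI's actual implementation -- `Variant`, `toDoubleArray`, and the map names are all made up for illustration): with map<string, variant> every typed read pays a conversion copy, while a typed map hands the value back directly.

    ```cpp
    #include <cassert>
    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical stand-in for a LabVIEW variant: type-erased storage, so
    // reading it back as a typed array (variantToData) forces a fresh copy.
    struct Variant {
        std::vector<double> flattened;  // pretend this is flattened, untyped data
        std::vector<double> toDoubleArray() const { return flattened; }  // copy
    };

    int main() {
        // Variant-attribute style: always map<string, Variant>.
        std::map<std::string, Variant> attrs;
        attrs["wave"] = Variant{std::vector<double>(1000, 1.0)};
        std::vector<double> copy = attrs["wave"].toDoubleArray();  // full copy

        // 2019-style map: the value type IS the array, no conversion step.
        std::map<std::string, std::vector<double>> typed;
        typed["wave"] = std::vector<double>(1000, 1.0);
        const std::vector<double>& ref = typed["wave"];  // direct access, no copy

        assert(copy.size() == 1000);
        assert(ref.size() == 1000);
        return 0;
    }
    ```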

  3. 3 hours ago, drjdpowell said:

    I first got depressed about NXG at one of the CAB sessions at a CLA Summit.  It was on UI improvements with NXG.  More modern UIs is something that could be significantly improved over CG.  Think of all the techniques demonstrated in web pages or smart phones.  At the very least, I was interested in the improved menus NXG would have (icons? Tip strips?).

    I totally forgot about that, but I remember feeling the same way about...version 2? I figured since it used Microsoft's UI stack it would have some of the same features, like automatic layouts at a minimum, and was disappointed. I know they've since worked on dynamic control instantiation, which is good, but if we're still stuck with finding window bounds and putting the top-left corner of a control in the right spot, what's even the point?

    Has anything like that crept into the more recent versions?


  4. VSCode+GitLens has really nice git integration for text, if you don't have access to (or don't want) full Visual Studio.

    I've been trying and trying to get away from SourceTree -- I've tried GitAhead and Git Extensions. I like Git Extensions best so far -- it's not friendly, but it does a good job of exposing all the little random git things, whereas a lot of tools (rightfully) try to make git look like it's easy. I do want to try Kraken now.

    Edit: do you know what GitKraken's license rules are? I thought the old rules were that it's only free for open source, but I can't actually find any license terms. It says you can only access "public" repos, but when I just installed it I could use it just fine with a local-to-my-computer repo. So I'm confused.


  5. 14 hours ago, Aniket Gadekar said:

    I have tested this toolkit for memory & performance, which is much faster than CVT & Tag Bus Library

    So I took a quick peek, nothing too detailed, but from what I saw there is pretty much no way this is unequivocally faster than the CVT or the tag bus. With the CVT it might be possible this approach is faster if you give it the worst possible load (10,000,000 simultaneous readers, each reading a random, constantly changing name), but in any sane case you'd look up a reference beforehand and then write to that, and the write time is bounded at basically the same performance as any reference-based data access. As for tag bus...it's literally just a cluster with an obscene number of different data types in big arrays. Data access is just indexing an array, and there is no way in LabVIEW for data access to be faster than indexing an array. In contrast, you are obtaining a queue by name, which involves several locks, doing a lookup, and then writing to the queue, which requires another lock. The CVT only needs one lock and tag bus requires zero. Memory I'll give you.
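    The two access patterns can be sketched in C++ (a loose analogy -- `tagBusRead`, `NamedStore`, and the field names are all hypothetical, not any of the actual libraries): tag-bus-style access is a bare array index, while lookup-by-name pays a string lookup plus at least one lock on every access.

    ```cpp
    #include <cassert>
    #include <map>
    #include <mutex>
    #include <string>
    #include <vector>

    // Tag-bus style: data lives in a plain array; a read is just an index.
    double tagBusRead(const std::vector<double>& tags, size_t idx) {
        return tags[idx];  // no locks, no name lookup
    }

    // Lookup-by-name style: every access pays a name lookup plus a lock.
    struct NamedStore {
        std::mutex m;
        std::map<std::string, double> byName;
        double read(const std::string& name) {
            std::lock_guard<std::mutex> lock(m);  // serialize all readers/writers
            return byName.at(name);               // string-compare tree lookup
        }
    };

    int main() {
        std::vector<double> tags = {1.5, 2.5, 3.5};
        assert(tagBusRead(tags, 1) == 2.5);

        NamedStore store;
        store.byName["temp"] = 2.5;
        assert(store.read("temp") == 2.5);
        return 0;
    }
    ```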

    It's also worth looking at this:



  6. You could also use a color box indicator behind a transparent control. Color box indicators don't need the UI thread to update, I don't think.

    To answer your question: I'm coming from an industrial control background, so I'd say B. You don't need to change color faster than, say, 200-300 ms unless color is critical to your application, so a consistent polled update seems much easier to implement and is more stable, of course.


  7. There's no such thing as determinism over standard Ethernet...

    I believe cDAQ will automatically pack the digital lines pretty effectively, but you can just use Task Manager (on Windows 10) to see the network traffic, or else use Resource Monitor. I wouldn't expect any issue with the network. In fact, I wouldn't even expect that function to be blocking for an Ethernet device, but maybe it is. Does anything else happen in Task Manager when you see the blip? Does it still happen if directly connected?


  8. Fair enough, but I guess as a bottom-level statement I (perhaps misguidedly) trust Windows to do a better job of cleaning up failed processes than LabVIEW does of cleaning up dead clones. This is especially true if the workers are doing anything that isn't completely core to LabVIEW -- for example, calling 3rd-party DLLs (or IMAQ).


  9. Whining mode: I know they are your and Jeff's baby, but the last time I had a small (1 VI doing some pipelined processing) greenfield opportunity to try them out, they were pretty cool, seemed to work well in the dev environment (if a bit slow to do all the scripting steps), and I was excited to be using them...and then app builder happened. I quickly gave up trying to make my single VI build, tore out channels and replaced them with your other baby, and it built right away. It's unfortunate, because the concept seemed to work as nicely as claimed, but...app builder. App builder truly ruins all good things 😢

    On 1/28/2020 at 7:28 PM, rharmon@sandia.gov said:

    I intend these clones to run for extended periods of time, months to maybe years.

    Helpful mode: This is a key line -- I wouldn't necessarily trust myself to write an application to run standalone for that long without any sort of memory bloat/leak or crash. My personal recommendation would actually be to spawn them as separate processes. You can build an EXE which allows multiple instances and pass parameters (like a TCP/IP port) to it. If one fails you can restart it using .NET events or methods (or presumably Win32 if you, like Shaun, have an undying hatred of .NET). You can also use a tool like NSSM to automatically restart the supervisor process (assuming Windows doesn't break in this time period).
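    The restart loop described above can be sketched as follows. This is a POSIX sketch using fork/waitpid (`/bin/false` stands in for the real worker EXE just so the demo fails deterministically); on Windows the equivalent would be CreateProcess plus waiting on the process handle, or the .NET Process events mentioned above.

    ```cpp
    #include <cassert>
    #include <cstdio>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main() {
        // "/bin/false" is a stand-in for the worker EXE; a real supervisor
        // would also pass instance parameters, e.g. "--port", "5000".
        const char* worker = "/bin/false";
        int restarts = 0;
        while (restarts < 3) {            // cap so the demo terminates
            pid_t pid = fork();
            if (pid == 0) {
                execlp(worker, worker, (char*)NULL);
                _exit(127);               // exec itself failed
            }
            int status = 0;
            waitpid(pid, &status, 0);     // block until this worker exits
            if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
                break;                    // clean exit: stop supervising
            ++restarts;                   // crashed/failed: spawn a replacement
        }
        assert(restarts == 3);            // /bin/false always fails, so we hit the cap
        printf("restarted %d times\n", restarts);
        return 0;
    }
    ```

    A real supervisor would add a backoff delay between restarts and some logging, and would itself be kept alive by something like NSSM.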


  10. Eh, I'm totally fine with security experts making it so I can't shoot myself in the foot. My main goal for my secure networking layer is to give me the most reasonably secure connection to another device without me having to know the details. They do have some protocols available but disabled by default (I think this includes SSL3).

    FIPS, as I understand it, is less about the algorithms and more about certification of a specific version of an implementation of the algorithm, which is then ossified and never changes, even for security fixes. So to their point, it's about checking a box, not providing security.


  11. On 1/18/2020 at 3:17 AM, ShaunR said:

    Although it may be easier from the User end, it's still fundamentally a port of OpenSSL but without FIPS support. LibreSSL doesn't support TLS1.3, currently, and according to their Git it's sitting at OpenSSL 1.0.1 so it will be a while before it has TLS1.3.

    Some would say avoiding FIPS is a feature :P

    And yeah, it will be a while for 1.3 -- they refused to do anything until it was actually standardized, and now they are working on it. Seems like their attitude is "1.2 isn't broken yet," which makes sense. Their focus from the start was to fork OpenSSL and clean it all up, which they seem to be making good progress on. I wouldn't expect any correlation to the OpenSSL version numbers at this point.


  12. LibreSSL seems to have a better focus on stability, plus their API is much better.

    https://en.wikipedia.org/wiki/LibreSSL

    When I wrote my little TLS library I wanted to avoid the DLL issue, so what I did was use the callback variant of the API here and here and just used the built-in LabVIEW TCP API. The callbacks are run when the library needs more data to enter or leave, so I just made the callbacks write to a queue, which I then pull from to feed the LabVIEW primitives. It's a much, much nicer API than all that OpenSSL stuff. The OpenSSL docs make my soul cry.
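    The callback-to-queue pattern looks roughly like this in C++. The callback shapes are modeled loosely on libtls's tls_read_cb/tls_write_cb (the real ones also take a struct tls* and a richer error convention); no actual TLS is performed here -- the point is only how the library's I/O gets bridged through queues to a transport you own.

    ```cpp
    #include <algorithm>
    #include <cassert>
    #include <deque>

    // The TLS library never touches a socket. Its read/write callbacks move
    // bytes through these queues, which the application drains with its own
    // transport (the LabVIEW TCP primitives, in the original post).
    struct Pipes {
        std::deque<unsigned char> toWire;    // ciphertext the library wants sent
        std::deque<unsigned char> fromWire;  // ciphertext received off the wire
    };

    // "Write" callback: the library hands us bytes destined for the peer.
    long write_cb(const void* buf, size_t len, void* arg) {
        Pipes* p = static_cast<Pipes*>(arg);
        const unsigned char* b = static_cast<const unsigned char*>(buf);
        p->toWire.insert(p->toWire.end(), b, b + len);
        return static_cast<long>(len);
    }

    // "Read" callback: the library asks for bytes that arrived from the peer.
    long read_cb(void* buf, size_t len, void* arg) {
        Pipes* p = static_cast<Pipes*>(arg);
        size_t n = std::min(len, p->fromWire.size());
        if (n == 0) return -1;  // real code would signal "want more data" here
        std::copy(p->fromWire.begin(), p->fromWire.begin() + n,
                  static_cast<unsigned char*>(buf));
        p->fromWire.erase(p->fromWire.begin(), p->fromWire.begin() + n);
        return static_cast<long>(n);
    }

    int main() {
        Pipes p;
        // Pretend the library emitted a 5-byte record via the write callback:
        write_cb("hello", 5, &p);
        assert(p.toWire.size() == 5);  // the app now sends these bytes over TCP

        // Pretend 3 bytes arrived over TCP; the library reads them back:
        p.fromWire = {1, 2, 3};
        unsigned char buf[8];
        assert(read_cb(buf, sizeof(buf), &p) == 3);
        return 0;
    }
    ```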


  13. On 1/1/2020 at 1:36 PM, ShaunR said:

    Indeed. It does have some useful features though like congestion control targeted at small packets and discovery.

    Seems to be more performant too.

    It's important not to miss the details on that performance comparison. They went for the specific use case of very underpowered devices, very infrequent sending of data, etc. For example, they assume a new TCP connection for every data packet, unless I misread*, while on the DTLS/CoAP side they ignore security handshaking, assuming you have a factory-installed pre-shared key instead. It also looks like it ignores the fact that CoAP uses a REST model, meaning a request-response cycle. If you include that information they are basically saying "UDP protocols with no reliability except a super-barebones re-transmit feature work better than TCP if you close and reopen the connection once per second"...which...yeah.


    * Quote: "For TCP, the session is terminated after each sensor report was received in the other end of the connection"


  14. 13 hours ago, X___ said:

    No interest in NXG from this neck of the woods would be the feedback to the Powers that Were.

    Lol. Sorry for sidetracking, but I tried to get it to even open, and it crashes on my machine. If you believe a KB, at some point someone before me installed a beta version of NXG on what is now my laptop (seems unlikely). The fix is to uninstall NXG, uninstall NI Package Manager, "Remove any supporting NXG files in the root directory, removing any trace of LabVIEW NXG from the machine," and reinstall everything. I asked NI support what "the root directory" is that I'm supposed to delete NXG stuff from, and they told me it meant the C drive 😕. Find an unspecified leftover NXG file somewhere on the C drive and delete it. What a joke.

    11 hours ago, ShaunR said:

    Hopefully TLS1.3 :D

    Lol
