Phillip Brooks

Posts posted by Phillip Brooks

  1. It seems to me that the biggest adoption stopper I saw re: NXG was the availability of NI toolkits that hadn't yet been ported.

    If the timescale to complete those big toolkit conversions was also going to impede expansion as a large system-solution provider, management may have asked,

    "If we already have the software and hardware in place today to do that, what does NXG offer long term?"

  2. 18 minutes ago, ShaunR said:

    You can't outsource security :cool: If you understand that all TLS communications are interceptable by governments because of CA's, then you might also be reticent when dealing with some governments.

    I completely understand and agree; it just seems a bit ironic that we can't trust certificate authorities. Isn't that their reason for existing?

    My company has a zero-touch provisioning solution for deploying our hardware on public networks. I load our top-level cert during test and then I'm done. This is done on a wired private LAN using SCP.

    Throw in telephony requirements like Lawful Interception and it is amazing that these devices work at all... 
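
    For what it's worth, the cert-load step is conceptually just a single file push over the isolated test LAN. A minimal sketch in Python (assuming the paramiko and scp packages; the address, credentials and paths below are placeholders, not our real provisioning values):

    import paramiko                              # pip install paramiko scp
    from scp import SCPClient

    STATION_IP = "192.168.10.20"                 # placeholder address of the unit on the private test LAN
    CERT_LOCAL = "root_ca.pem"                   # placeholder for our top-level cert file
    CERT_REMOTE = "/etc/ssl/certs/root_ca.pem"   # placeholder destination path on the unit

    ssh = paramiko.SSHClient()
    # Auto-accepting host keys is only reasonable because this runs on an isolated wired LAN
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(STATION_IP, username="provision", password="factory-only")

    with SCPClient(ssh.get_transport()) as scp:
        scp.put(CERT_LOCAL, CERT_REMOTE)         # push the cert onto the unit over SCP

    ssh.close()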

     

  3. Thanks all,

    I've loaded and started playing with Synkron; not enough experience with it yet to decide if I will stay with it.

    Acronis is probably the best bet, but this is my work laptop and I don't want to spend my own money to have the same security I've had in the past.

    I had my laptop stolen about three years ago while travelling. When we initially added Carbonite to all systems, they told us it made full-disk backups. The IT dept told me after the loss that they had purposely changed Carbonite to back up only the user folders to reduce storage costs. My user.lib and instr.lib were lost, and I had to dig up some very old manual copies and rework them all. I was able to modify the Carbonite settings on my replacement laptop, and I occasionally check that I still have a full backup available. So far so good, at least until we discontinue Carbonite for the M$ solution.

    Everyone is pushing subscription-based cloud backup services, but after digging around on the Acronis site I did find that they still sell a perpetual one-time license as well.

  4. My company has decided to save money and switch from full Carbonite backups of our systems to some sort of M$ Office Cloud feature that only backs up the documents under our user profiles.

    Apparently there is no option to back up anything outside of the /Users folders.

    I've decided to connect a USB-C 3.2 drive to my Dell dock and use that as a local backup device.

    Windows backup sucks. Anyone have a good suggestion for a reliable incremental backup utility?

  5. I had a case years ago where I needed to create a solution that would receive three UDP streams over three distinct Ethernet wires on a single LabVIEW station at 100 Hz.

    I used queues to receive the data and used VI server to start three processing loops that read from each queue. It worked fine.

    The second requirement was to optionally receive a single stream at 1200 Hz. There was no hardware at the time to send the data to my application. I struggled for a while, and finally used an NI-6509 DIO board in a second computer that was connected to a function generator. I interpolated a 16-hour 100 Hz log to create a 1200 Hz data set, then used a timed loop triggered by an input to the 6509: the function generator supplied a square wave, and the timed loop sent the UDP data. My application logged the high-speed data with no drops; the only thing I needed to do was increase the UDP receive buffer size on the application side.

    You can definitely get there, but will need some timed hardware resources.    
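
    The receive-side fix really was just the socket buffer. A rough sketch of the idea in Python, since I can't paste a VI here (the port, datagram size and buffer value are placeholders; the same enlarged-buffer idea is what I applied on the LabVIEW side):

    import socket

    PORT = 61557                   # placeholder port for the 1200 Hz stream
    RCV_BUF = 4 * 1024 * 1024      # oversized OS receive buffer so bursts aren't dropped
    MAX_DATAGRAM = 2048            # placeholder upper bound on datagram size

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, RCV_BUF)   # the key step
    sock.bind(("", PORT))

    while True:
        data, addr = sock.recvfrom(MAX_DATAGRAM)
        # Hand the datagram straight to a queue for a separate consumer loop;
        # keep parsing and logging out of the receive loop so it never falls behind.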

  6. On 2/4/2020 at 4:28 AM, drjdpowell said:

    Unfortunately this contrasts with the current behaviour that null-->NaN for a floating-point number, rather than being the default number input.  In standard JSON, the float values NaN, Infinity and -Infinity have to become null, and to convert them back to a default value doesn't make sense.  We could add an option to "ignore null items" which would treat nulls as equivalent to that item not existing. 

    The native LabVIEW Unflatten From JSON Function might be of use.

    I used a Salesforce REST API that returned JSON. There were numerous string and numeric fields that returned as NULL.

    If you use the native LV function and change 'default null elements? (F)' to true and set 'enable LabVIEW extensions? (T)' to false, you get the default values assigned from your input type & defaults cluster.

    https://zone.ni.com/reference/en-XX/help/371361R-01/glang/unflatten_from_json/
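
    If you ever need to emulate that 'default null elements' behaviour outside of LabVIEW, the idea is simple enough to sketch in a few lines of Python (the field names and defaults here are made up for illustration):

    import json

    # Placeholder record layout: the defaults we want null fields to fall back to
    DEFAULTS = {"name": "", "score": 0.0, "active": False}

    def unflatten_with_defaults(text, defaults):
        """Parse JSON and substitute the supplied defaults for null (None) fields."""
        parsed = json.loads(text)
        return {key: default if parsed.get(key) is None else parsed[key]
                for key, default in defaults.items()}

    reply = '{"name": null, "score": null, "active": true}'   # typical reply with nulls
    print(unflatten_with_defaults(reply, DEFAULTS))
    # -> {'name': '', 'score': 0.0, 'active': True}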

     

  7. On 9/5/2019 at 11:41 PM, Cat said:

    We could try running iperf, just to confirm it's the loopback adapter and not the code.  But as I said, we've run the same code with 2 hardware NICs (10G) connected on the same computer and it works fine.

    Many higher-end NICs offer a feature called a TCP Offload Engine. These NICs offload much of the TCP/IP processing that would otherwise take place on the CPU.

     

    That may be the difference here.

     

    https://en.wikipedia.org/wiki/TCP_offload_engine

  8. I thought I replied to your DM but I don't see it in my mailbox now.

    From what I can tell, the rectangular bar code in your image is classified as Data Matrix, ECC200, DMRE.

    I used BarTender to create an ECC200 type label and could NOT decode it with ZXing or the on-line decoder. ( https://zxing.org/w/decode.jspx )

    I WAS able to decode both your image and my BarTender rectangular bar code with a Datalogic GD4400 scanner.

    I tried updating my ZXing assembly from the old 14.0 version to the latest I could easily find (16.4), but it still didn't decode.
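
    If anyone wants a quick desktop cross-check outside the .NET assembly, a small script works. This is a sketch in Python using the pylibdmtx bindings (an assumption on my part; I only tested with ZXing and the hardware scanner, and I haven't verified that libdmtx handles the DMRE rectangular sizes. 'label.png' is a placeholder filename):

    from PIL import Image                      # pip install pillow pylibdmtx
    from pylibdmtx.pylibdmtx import decode     # libdmtx decoder for Data Matrix symbols

    results = decode(Image.open("label.png"))  # placeholder path to the exported bar code image

    if not results:
        print("No Data Matrix symbol found")
    for symbol in results:
        print(symbol.data.decode("utf-8"))     # payload of each symbol that was found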

  9. Yes.

    4 years ago, I took my current job because the lone engineer supporting the manufacturing of my product suddenly left. I believe there were multiple reasons, but after I got involved I realized that I would probably have done the same if I were in his shoes.

    There were 10 test stations running 5 test sequences for three specific products. The test system was based on an older version of CVI / TestStand with run-time deployment that originated at a consulting company. The product under test has some diagnostics, but much of it is tested by configuring it as a customer would. The product software changed over time (as would be expected) and the tests stopped working. Sometimes the changes demanded a change to the CVI code, but the previous engineer had no experience with it. Solution? Replace the functions that don't work with LabVIEW VIs!

    Move forward a couple of years, add new test requirements and new web-based interfaces, and the test system turned into quite a mess of curl, perl, CVI, LabVIEW and the TestStand RTE.

    It got so bad that the product OS was locked down to a specific software version and then upgraded at the end to the shipping software. The CVI code would crap out when IT installed Windows updates (DLL hell). Test times per UUT went from an original 30 minutes to almost two hours. The product ended up being tested twice just so we could install the latest software before shipping. First-pass yield was 80%, with the primary cause being test execution failures: exceeded timeouts and Selenium/Firefox step failures caused by Java updates or an operator changing the screen resolution.

    This is where I came in. I dubbed the existing system "The Jenga Pile": touch anything and it all came crashing down. It's easy to say that this is an implementation issue. My manager and I decided that we needed to move to a current version of Windows / LabVIEW / TestStand, get rid of kludges like Selenium and curl, and rewrite the tests from scratch based on the latest version of the product software.

    I was 6 months and 75% or so into the project when the only early advance copy of the new product software was given to me. Many of my tests were based on a telnet session to the product. Telnet was now REMOVED from the product! (Not disabled, but REMOVED, for security reasons.) I needed to change to SSH or use the serial console port. LabVIEW and SSH (don't get me started)?!?!

    I was already done with my test coding and had intended to spend the last two months tweaking the TestStand environment and creating my first RTE deployment. I used SSH.NET here and there where necessary to recover. After taking 4 weeks to rework the telnet-related issues, I found I could not reliably create and install TestStand deployments. SSH.NET would not work. Oops! Did I mention I had never created an RTE deployment before? My previous employer used debug deployment licenses and I never once had to deal with the RTE. Now I know why.

    I told my manager to cancel the plans for upgrading our 10 deployment licenses and spent 4 weeks in overdrive creating my own crude sequencer and report tools. I saved the cost of NI upgrades, met the deadline and made my life easier down the road.
    Changes now only require compiling a new EXE and dropping it on the test stations. I have a UI that I can change and deploy in minutes, and I haven't had to think about Process Models or creating 100s of MB of deployment packages for simple changes. It is not actor-based or even very modular at this point.
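
    To be clear, "crude sequencer" really means crude; conceptually it isn't much more than the loop below. This is a Python-flavoured sketch of the pattern, not my actual LabVIEW code, and the step names and report path are placeholders:

    import datetime
    import json

    # Placeholder test steps: each is just a callable returning (passed, details)
    def power_on_self_test():
        return True, "diagnostics OK"

    def configure_like_customer():
        return True, "customer-style configuration applied"

    SEQUENCE = [power_on_self_test, configure_like_customer]

    def run_sequence(uut_serial):
        report = {"uut": uut_serial,
                  "started": datetime.datetime.now().isoformat(),
                  "steps": []}
        for step in SEQUENCE:
            passed, details = step()
            report["steps"].append({"name": step.__name__, "passed": passed, "details": details})
            if not passed:
                break                          # stop-on-fail, like a sequencer's default behaviour
        report["passed"] = all(s["passed"] for s in report["steps"])
        with open(f"report_{uut_serial}.json", "w") as fh:   # placeholder report location
            json.dump(report, fh, indent=2)
        return report["passed"]

    print(run_sequence("SN0001"))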

    I love the promise of TestStand, but I'm not controlling a super-collider and certainly not doing rocket science. Give me TestStand Lite (google it) as an actor where I can register some LabVIEW UI elements to monitor and control execution.

    Give me TestStand Lite, limit the engine to running LabVIEW VIs and include the engine in the Professional edition. The TestStand Lite sequence editor should integrate seamlessly into the LabVIEW project.
