Reds

Members
  • Posts

    53
  • Joined

  • Last visited

  • Days Won

    3

Reds last won the day on August 5 2022

Reds had the most liked content!

Profile Information

  • Gender
    Not Telling

LabVIEW Information

  • Version
    LabVIEW 2021
  • Since
    1999

  1. In theory maybe.... But how would I get LabVIEW to create QUIC connections?
  2. No - I've always maintained completely separate office and development machines on two separate physical Ethernet networks. In my opinion, that is unavoidable in the modern cyber threat environment. You can't give people (or even yourself) local Admin account access on any network that is used for browsing the Internet, reading email, or doing normal office stuff. Personally, if I'm doing dev work on my dev machine, I'll pop open an RDP session into my office machine to do normal office stuff. Devs need to have Admin account access on their dev machines. Devs should never have Admin account access on their office machines. Full stop.
  3. I have a system I've been using for years that works pretty well. Windows has a very capable boot manager. Once set up, it will prompt you to select which operating system you want to boot into at system startup. You can have a different copy of Windows installed on basically every partition of your GPT-formatted hard disk. If you start googling the "bcdedit" and "bcdboot" Windows commands, you'll quickly find yourself deep in this rabbit hole. Sadly, I have the options for those command-line tools memorized because I've been doing this so frequently, for so many years.

     If you really want to get sophisticated, you can also use the Windows boot manager to boot *directly* into a VHD - without any host operating system running. This gives your VHD *direct* access to all the NI hardware, with no translation layers in between. This is known as "Native VHD Boot" in the Microsoft documentation. If you're booting directly from VHDs, those VHDs are portable to new machines. You're just going to have to get very familiar with the bcdedit, bcdboot, and diskpart command-line tools. It's all doable, and it all works, but these tools will start to make you feel like you're living in the Linux world, not the Microsoft world. It just takes time to learn and get familiar with the tools.

     I've been taking this approach for many years - I can't even imagine living without these capabilities. I don't know why anyone would mess around with Docker or any of that stuff. The capabilities built directly into Windows meet 100% of my needs. The only special thing you need is a generously sized SSD, formatted with many different partitions (one partition per Windows copy).
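     For anyone curious about the native-VHD-boot route, here is a rough sketch of the commands involved. The paths, the entry description, and the {guid} placeholder are illustrative only; these are Windows-only commands run from an elevated prompt, so check each one against the Microsoft documentation before running anything:

     ```
     :: Copy the current boot entry and point the copy at a VHD.
     bcdedit /copy {current} /d "LabVIEW 2021 Dev (VHD)"

     :: Use the GUID printed by the previous command in place of {guid}.
     bcdedit /set {guid} device vhd=[C:]\VHDs\dev2021.vhdx
     bcdedit /set {guid} osdevice vhd=[C:]\VHDs\dev2021.vhdx

     :: If the VHD holds a freshly applied Windows image, attach it first
     :: (diskpart: select vdisk file=... then attach vdisk), then create
     :: its boot files, assuming it mounted as V:
     bcdboot V:\Windows
     ```

     On the next reboot the boot manager should list the new entry alongside the host OS.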
  4. I have a LabVIEW application that produces a continuous stream of binary measurement data. Right now, this measurement data streams into a LabVIEW notifier, and multiple "listeners" can "subscribe" to that data and monitor what is happening. Both the producer (publisher) and the consumers (subscribers) are all native to LabVIEW right now, so it all works great with LabVIEW notifiers. But now I want to have a non-LabVIEW listener. What to do? The semi-obvious choice would be to switch the code so that the data is streamed over TCP/IP instead. But what higher-layer protocol would you want to use for that? It seems like the RESTful and gRPC models are more client-server oriented. I want something with a very loosely coupled producer-consumer model, where you have one producer and any number of listeners. What are some good candidates?
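     One baseline worth considering is plain TCP with the fan-out done by hand: a small broadcast server that accepts any number of subscribers and pushes length-prefixed records to all of them, which any non-LabVIEW client can read. This is only an illustrative sketch - the `Broadcaster` class and helper names are made up for the example, not a library API:

     ```python
     import socket
     import struct
     import threading

     class Broadcaster:
         """Accepts any number of subscribers; fans every record out to all."""

         def __init__(self, host="127.0.0.1", port=0):
             self._server = socket.create_server((host, port))
             self.port = self._server.getsockname()[1]
             self._subs = []
             self._lock = threading.Lock()
             threading.Thread(target=self._accept_loop, daemon=True).start()

         def _accept_loop(self):
             while True:
                 conn, _addr = self._server.accept()
                 with self._lock:
                     self._subs.append(conn)

         def publish(self, payload: bytes):
             # Length-prefix each record so subscribers can delimit the stream.
             frame = struct.pack(">I", len(payload)) + payload
             with self._lock:
                 dead = []
                 for conn in self._subs:
                     try:
                         conn.sendall(frame)
                     except OSError:
                         dead.append(conn)  # drop subscribers that went away
                 for conn in dead:
                     self._subs.remove(conn)

     def _read_exact(sock, n):
         buf = b""
         while len(buf) < n:
             chunk = sock.recv(n - len(buf))
             if not chunk:
                 raise ConnectionError("stream closed")
             buf += chunk
         return buf

     def read_record(sock):
         """Subscriber side: read one length-prefixed record."""
         (length,) = struct.unpack(">I", _read_exact(sock, 4))
         return _read_exact(sock, length)
     ```

     Off-the-shelf options that fit the same one-publisher/many-subscribers shape include an MQTT broker or ZeroMQ's PUB/SUB sockets, both of which have clients in most languages.
     
     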
  5. Thanks Greg, appreciate it! I think I need to figure out how this MASM toolkit would be installed with our EXE on customer systems. If I can integrate the MASM installer into our LabVIEW installer package, this could work for us. I'll investigate... You make a good point about single precision being all that's necessary also.
  6. Interesting, thanks Rolf! So then... I wonder if lvanlys.dll is not written to call the multi-core version of the FFT in MKL? Intel vTune Profiler also provides the hard evidence: [profiler screenshot] And if this is not a failing grade for lvanlys.dll, I don't know what is: [screenshot]
  7. But if I use Resource Monitor to look at the DLLs my LabVIEW-built EXE is calling, none of the DLLs have "MKL" in the filename. It seems to me like LabVIEW is using our old friend "lvanlys.dll" to perform FFT calculations. Can anyone confirm my suspicion?
  8. If anyone else is interested, here's some evidence from one of my machines, indicating that NI is installing various versions of the Intel MKL. They look reasonably up-to-date. So I'm still unsure why they're not taking advantage of multi-core...
  9. "Documentation is aspirational" is a great line that I'm totally stealing. 😂 I laid down the big bucks for the top-of-the-line 18-core PXIe-8881. I feel like the engineering gods are mocking me as 17 of the 18 cores sit idle when I run my FFT. I mean, what is the point of an 18-core PXI CPU if NI's default math library can only use one of them? Is there any other T&M application besides math/analysis that would actually benefit from 18 cores? Maybe I need to use MATLAB to access all 18 cores? The whole thing is kind of crazy if you ask me.
  10. Thanks Rolf. I was actually looking at switching to the Intel MKL (instead of LabVIEW native) in a bid to improve multi-core performance. But if LabVIEW is already using MKL, I wonder why it doesn't seem to take advantage of multiple cores for FFT?
  11. Does anyone know which math library LabVIEW uses to do the FFT and vector math operations? Has it been updated over the years to accommodate the latest Intel CPU extensions, or has it been static over time?
  12. Yeah, I wish that was possible. The problem is that a third party analysis application can't understand the first 100kB of the file, and so that software incorrectly concludes that the entire remainder of the file must be corrupt.
  13. The jumbo file is recorded with a bunch of header data starting at file offset zero. This header data is not actually useful, and it actually causes a third party analysis application to think that the recorded data is corrupt. If I can manage to delete only the header data at the beginning of the file, then the third party analysis application can open and analyze the file without throwing any errors.
  14. Yeah, I dug into the Microsoft docs on sparse files, and I don't think that technology is going to solve my problem after all. Cool stuff. Good to know. But it doesn't seem like it's going to solve my immediate pain. I guess what's really needed is a way to modify the NTFS Master File Table (MFT) to change the starting offset of a given file. But I didn't actually see any Win32 APIs that could do that. I'm sure it must be possible with some bit banging, but I'd probably be getting in way over my head if I tried to modify the MFT using a method that was not Microsoft-endorsed.
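     Absent a supported way to move a file's starting offset in the MFT, the portable fallback is to slide the remainder of the file forward in chunks and truncate. It rewrites the whole payload once, which is slow for a jumbo file, but it needs no extra disk space and no third-party tools. A minimal sketch (the function name and chunk size are illustrative):

     ```python
     import os

     def strip_header(path: str, header_size: int,
                      chunk_size: int = 64 * 1024 * 1024) -> None:
         """Remove the first header_size bytes of a file in place.

         Copies the payload forward chunk by chunk, then truncates the
         leftover tail. The read position always stays ahead of the
         write position, so the copy never overwrites unread data.
         """
         with open(path, "r+b") as f:
             read_pos = header_size
             write_pos = 0
             while True:
                 f.seek(read_pos)
                 chunk = f.read(chunk_size)
                 if not chunk:
                     break
                 f.seek(write_pos)
                 f.write(chunk)
                 read_pos += len(chunk)
                 write_pos += len(chunk)
             f.truncate(write_pos)
     ```

     Note this is not crash-safe: if the process dies mid-copy the file is left partially shifted, so it's worth doing on a copy or after a backup.
     
     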