Leaderboard

Popular Content

Showing content with the highest reputation since 07/11/2022 in all areas

  1. To all things there is a season. Jeff Kodosky helped found National Instruments and invented LabVIEW. He inspired hundreds of us who shaped its code across four decades. But Jeff says it is time to change his focus. Today, NI announced Jeff's retirement. He will probably always be noodling around on LabVIEW concepts and will remain open to future feature discussions. But his time as a developer is done. Maybe you didn't know that: Jeff still slings code, from big features to small bugs. He has been a developer for most of those years, happy to have others manage the release and delivery of his software. I spent over two decades working at his side. He taught me to look for what customers needed that they weren't asking for, and to understand the problems they didn't talk about because they thought those problems were unsolvable. And he built a team culture that made us all collaborators instead of competitors. Thank you, Jeff, for decades of brilliant ideas and for staying the course to see them develop into reality. Your work will live on as one of the key tools in humanity's expansion to Mars.
    11 points
  2. The conference went well. We got lots of good video, but it will take a while to edit. I don't have an exact timeframe, but they should be posted within the next month or so. We had an extra cameraman and better lighting and angles this year, so the videos should be even better than last year.
    6 points
  3. 5 points
  4. It's probably my limited command of the English language, but for me this sounds about as intelligible as a dissertation about the n-th dimensional entanglement of virtual particles between different universes.
    4 points
  5. I don't know about you guys, but I hate writing strings. There are just too many ways to mess them up and, in this case, it might just be too tedious. So I wanted to create a way to make SQLite CREATE and INSERT statements based on a cluster. The type-inference code was based on JDP Science's SQLite Library. Thanks! create and insert from cluster.vi
    3 points
  6. I was surprised today by one of LabVIEW's most useful functions (IMO), which I use all the time. After so many years, I'm only now seeing this behavior/feature, so I thought I'd share it 🙂 I've always used an empty N-dimensional array for the desired-type input, only to accidentally find out today that I can also use a scalar for the type. Ha!
    3 points
  7. This works but is fraught with trouble. A LabVIEW array is dynamic, as LabVIEW is a fully managed programming environment. No, it is not .NET managed; at the time the LabVIEW developers designed the basics that are still valid today, .NET was not even an idea, let alone a fact. But it is managed, and the LabVIEW runtime handles all of that behind the curtains for you. This means that a LabVIEW variable, and especially a handle (which arrays and strings are), is only guaranteed to be valid for the duration of the Call Library Node call itself. After that node returns, any passed-in array, string or even scalar variable can at any point be resized, relocated or simply deallocated. So the pointer that you get in this way can very well be invalidated immediately after that Call Library Node returns. For performance reasons, LabVIEW tries to maintain arrays and strings for as long as possible when it can, but deciding whether it can, and whether it prioritizes this rule above other possible rules to improve performance, is a tricky business and can even change between LabVIEW versions. It is pretty safe to assume that an array or string wire that you wire through a Call Library Node, that doesn't branch into other nodes, and that is wired to the end of the current diagram structure, is left untouched for the duration of this diagram structure. But even that is not something the LabVIEW management contract guarantees; it's just the most prudent thing to do in almost any case to not sacrifice performance. Once you have a branch in the wire before or after the Call Library Node that retrieves the internal data pointer in the handle, or you do not wire the array data to the diagram structure border, all bets are off as to if and when LabVIEW may decide to modify that handle (and consequently invalidate the data pointer you just got).
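    To make the lifetime issue concrete from the C side, here is a minimal sketch (it assumes a 1-D array of DBL passed by handle; the type and function names are illustrative, not taken from any official NI header):

      #include <stdint.h>

      /* Roughly what a Call Library Node hands to a DLL when a 1-D DBL
         array is configured to be passed as a handle. */
      typedef struct {
          int32_t dimSize;      /* number of elements               */
          double  elt[1];       /* start of the inline data block   */
      } DblArray, *DblArrayPtr, **DblArrayHdl;

      static double *g_cached;  /* do NOT do this */

      int32_t grab_pointer(DblArrayHdl arr)
      {
          double *p = (*arr)->elt;   /* valid only during this call       */
          g_cached = p;              /* after we return, LabVIEW is free to
                                        resize, relocate or deallocate the
                                        handle, so g_cached may dangle     */
          return (*arr)->dimSize;
      }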
    2 points
  8. View File LV muParser LV-muParser provides a simple LabVIEW API for the muParser fast math expression parser. A modified version of muParser v2.2.5 is included; it will be installed to your "<LabVIEW>\resource" directory. I have added support for the "!" (not) operator, as well as ":" as a valid character in variable names. You will find the muParser API in the functions palette under "Addons > LAVA > muParser" muParser: http://beltoforion.de/article.php?a=muparser LV-muParser source on github: https://github.com/rfporter/LV-muParser Submitter Porter Submitted 08/25/2017 Category General LabVIEW Version 2015 License Type BSD (Most common)
    1 point
  9. Try another USB port or a powered hub. It seems that something in the USB communication is getting messed up somehow. There are many possible reasons aside from actual firmware bugs in the Keysight device:
    - The USB bridge in your computer is messed up, faulty, or has bad drivers.
    - The USB port your device is connected to is a low-power USB connector and the Keysight likes to draw more power than your computer can provide. Many USB ports do not perform as they should according to the specs (and so do devices, sometimes drawing high transient power surges that are not allowed by the USB specs).
    - The device might have a firmware bug; check whether Keysight has newer firmware available and upgrade your device if that is possible.
    NI-VISA is one of the few software drivers that never was critical in having to match the LabVIEW version closely. I would be very surprised if that has anything to do with your problem.
    1 point
  10. Have you read the documentation text file included? There is a function rand() which calls the LabVIEW Random Number node and returns its value. Do not expect crypto-quality randomness from this: the LabVIEW random number generator has been investigated repeatedly and found to have reasonable randomness, but with a limited period. For most simple requirements that is quite enough, but if you need real crypto-quality randomness a lot more serious work would be needed, and you can quickly forget about finding that as a free library. As to now(), that's a bit tricky. The entire formula parser really only operates on doubles internally and doesn't have any other types. The newly hacked-in bitwise operators were simply added by converting the double to a U64, doing the bitwise operation, then storing the result back as a double on the value stack. That will do for most bitwise operations on up to 32-bit integers, but can start to get inaccurate if you chain many bitwise operators in a formula. So what would you expect now() to produce? A double representing the number of seconds since January 1, 1904 GMT, as the LabVIEW epoch is? Or rather since January 1, 1970 UTC, as the Unix epoch is? Or maybe the number of days since January 0, 1900, as the Excel epoch is, or would January 1, 1601 UTC be better, as the Windows FILETIME epoch is? You see, lots of possibilities, all of them equally right or wrong, so avoiding the problem by not implementing it is simply the easiest solution. 😁
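    For what it's worth, the double-to-U64 round trip described above looks roughly like this in C (a sketch of the idea only, not the parser's actual code):

      #include <stdint.h>

      /* Bitwise AND on a double-typed value stack: convert to U64,
         operate, convert back. Exact only while both operands fit the
         53-bit mantissa of a double, hence the 32-bit caveat above. */
      double bit_and(double a, double b)
      {
          uint64_t ua = (uint64_t)a;
          uint64_t ub = (uint64_t)b;
          return (double)(ua & ub);
      }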
    1 point
  11. View File TIC-TAC-TOE Game Tic-tac-toe is a fun kids' game. I created it back in 2011 when I was interning and learning LabVIEW. Please feel free to download and play this game. TIC-TAC-TOE.zip Submitter leumaseoj Submitted 08/04/2022 Category General License Type BSD (Most common)
    1 point
  12. So I'm dealing with calling some DLLs for external communication. When you call the DLL, one function returns a pointer to an array of data. Using MoveBlock I can successfully get that array of data back into LabVIEW. But one issue I'm having is that I also need to go the opposite way: I have an array of data in LabVIEW and I need to send it to the DLL. I realize the normal way you would do this is by calling the DLL with the Array Data Pointer configuration on the Call Library Node. The problem is that the data I need to reference is actually a reference within a reference. In LabVIEW I have an array of U8 like this: [0]=0xFF [1]=0xAB [2-5]=Pointer to Data [6,7]=0x1234. I'm already calling the DLL passing in the array of 8 bytes using the Array Data Pointer, but within that array there need to be 4 bytes containing a pointer to another array of data. Is there a way to create a pointer in LabVIEW to some array of data? Then I can put it in my array and send that. Sorry, my C and DLL-calling experience isn't as good as my LabVIEW experience, and even finding the words to search for is difficult, since searching returns all kinds of information about MoveBlock and LabVIEW control references. Thanks. Edit: Okay, looks like I might need DSNewPtr to make a pointer that I can then pass into the DLL. It is partially working; it just isn't the data that I put into the pointer.
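    In case it helps to see it from the C side, this is roughly the layout the DLL presumably expects for that 8-byte command (a hedged sketch assuming a 32-bit pointer at bytes 2-5; the struct and field names are made up for illustration):

      #include <stdint.h>

      #pragma pack(push, 1)
      typedef struct {
          uint8_t  header[2];   /* bytes 0-1: 0xFF, 0xAB                  */
          uint32_t dataPtr;     /* bytes 2-5: pointer to a second buffer,
                                   e.g. one allocated with DSNewPtr and
                                   filled via MoveBlock                   */
          uint16_t trailer;     /* bytes 6-7: 0x1234                      */
      } Command;
      #pragma pack(pop)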
    1 point
  13. You may want to try this library. No guarantees about its proper operation. It's a quickly hacked-together version of the library that I posted earlier. It's not really tested for the extra bitwise operators and there is no provision for correct left and right association of these operators, so it might require explicit bracketing to work as expected, unlike other languages and formula parsers that tend to follow the mathematical and/or C-style conventions. LabVIEW 2018 for now. ExprEval.zip
    1 point
  14. For those interested in an announcement I mentioned at the start of this thread:
    1 point
  15. It's a skill we are born with over here. Along with back-handed compliments and damning with faint praise.
    1 point
  16. Ah. sorry. Looks like they renamed it. LabVIEW <version>\examples\Object-Oriented Programming\Reference Object
    1 point
  17. I'd first ignore LabVIEW and just look at the theoretical limit using command-line tools. I used iperf in the past between two computers, with one set up as the server and one as the client. The fact that you see 100% makes me think there is a different bottleneck.
    1 point
  18. TL;DR - Download the Linux Community Edition 2021 SP1 ISO and extract it. In the "INSTALL" and "utils/install_helpers.sh" scripts in the extracted files, replace all instances of "ubuntu" with "ubuntu|zorin". For those who don't mind a little bit of reading (I can be long-winded): I don't know if anyone has tried this yet, but I figured I would post it for archival purposes and future LAVA searches. For those who don't already know, LabVIEW 2021 SP1 supports Ubuntu 18 and 20. It's worth noting, however, that the VI Analyzer Toolkit is not supported on Ubuntu according to the "README.html" file included with the installation media. For grins, I tried installing it on Zorin 16 simply by running the normal "sudo ./INSTALL" command, but was met with "Sorry, LabVIEW is not currently available for this O/S and architecture....". In order to get LabVIEW 2021 SP1 to install on Zorin 16, just open the "INSTALL" and "utils/install_helpers.sh" scripts with root privileges, search and replace all instances of "ubuntu" with "ubuntu|zorin", then run the "INSTALL" script as usual. My assumption is that this may work for all distributions based on Ubuntu Bionic or Focal by adding a conditional for whatever ID is returned by the following commands. In the example below, "zorin" was returned:
    $ . /etc/os-release
    $ echo ${ID}
    zorin
    Disclaimer: Because ZorinOS isn't listed as an officially supported distribution, assume that it is not supported by NI.
    1 point
  19. Nice! While you are there please convince Elon to buy NI and turn it back into an engineering company 🤣
    1 point
  20. Sounds like they were too busy updating the NI logo and colors to implement VISA. Oh well.
    1 point
  21. If NI starts talking about NFTs, web3 or cryptocurrency we are all finished...
    1 point
  22. I was unaware of this bug until today, but I figured it might be appreciated as a heads up in this sub-forum. There is an issue with LINX (or whatever it is now) and LV2021. The link below has detailed instructions for configuring a fresh pi as well as updating one that is already set up for 2020. https://forums.ni.com/t5/Hobbyist-Toolkit/Labview-CE-2020-connects-to-raspberry-but-CE-2021-does-not/td-p/4198964
    1 point
  23. Python. I know of very few LabVIEW positions in Europe for T&M. Very few vacancies are for LabVIEW now, generally. This seems to be a move toward specific corporate customer types, like CERN. This will hit consultants, start-ups and small niche suppliers the hardest, and say goodbye to most open-source toolkits. With no new growth in uptake of LabVIEW, and only walled-off future-proofing for existing customers, I see this as the death throes of LabVIEW as the corporate customers eventually move away.
    1 point
  24. Well, it looks like that's just Windows. I can manually resize other app windows (Like Excel) to a small size and get the same thing. For instance, here's Notepad++ on Windows 10, where I have the mouse over the minimize button to highlight it, but the line is visible outside the window even if I'm not hovering over the button:
    1 point
  25. What you describe sounds very similar to our situation, except that we only have a single top-level repository for all stations. If you look at a single station repository of yours, however, the structure is almost the same. There is a single top-level repository (station) which depends on code from multiple components, each of which may depend on other libraries (and so forth).
    * Station
      + Component A
        + Library X
        + Library Y
      + Component B
        + Library X
        + Library Z
      + ...
    In our case, each component has its own development cycle and only stable code is pulled into the top-level repository. In your case there might be multiple branches for different stations, each of which I imagine will eventually be merged into its respective master branch and pulled by other stations.
    * Station (master)
      + Component A (master)
        + Library X (dev-station)
        + Library Y (master)
      + Component B (dev-station)
        + Library X (dev-station)
        + Library Z (master)
      + ...
    In my opinion you should avoid linking development branches in top-level repositories at all costs. Stations should either point to master (for components that are under development) or to a tag.
    * Station A (released)
      + Component A (tag: 1.0.0)
      + Component B (tag: 3.4.7)
    * Station B (in development)
      + Component A (tag: 1.2.0)
      + Component B (master) <-- under development
    * Station C (released)
      + Component A (tag: 2.4.1)
      + Component B (tag: 0.1.0)
    Not sure if I misunderstand your comment, but you don't actually have to branch a submodule. In fact, anyone could simply commit to master if they wanted to (and even force-push *sigh*). Please also keep in mind that submodules will greatly impact the git workflow and considerably increase the complexity of the entire repository structure, especially if you have submodules inside submodules... In my opinion there are only two reasons for using submodules:
    - To switch branches often (i.e. to test different branches of a component at station level).
    - To change code of a component from within the station repository.
    Both are strong indicators of tightly coupled code and should therefore be avoided. We decided to use subtrees instead. For every action on a subtree (pull, change branch, revert, etc.) there is a corresponding commit in the repo. We have a policy that changes to a component are done at component level first and later pulled into the top-level repository. Since the actual code of a subtree is included in the repository, there is no overhead for subtrees that include subtrees, and things like automated tests work the same as for regular repositories. You have the right intention, but if any developer is allowed to make any changes to any component, there will eventually be lots of tightly coupled rogue branches in every component, which is even worse than the current state. Not to forget that you also need to make sure that changes to a submodule are actually pushed. This is where UI tools become handy, as they provide features like pushing changes for all submodules when pushing the top-level repository (IIRC Sourcetree had a feature like that). To be fair, subtrees don't prevent developers from making those changes. However, since the code is contained in the top-level repository, it becomes the responsibility of the station owner instead of the component owner. In my experience it's a good idea to assign a lead developer to each component, so that every change is verified by a single maintainer (or a group of maintainers). In theory there should only be a single branch with the latest version of the component (typically master). Users may either pull directly from master, or use a specific tag. You don't want rogue branches that are tightly coupled to a single station at component level.
    1 point
  26. WOOT! Just tested this MQTT library running on a simulated cRIO in a VM and it also works and can talk to my Azure IoT hub 🤩
    1 point
  27. Version 1.0.0.1

    437 downloads

    This VI calculates the Cp and Cpk values of an array of DBL values. It's quite useful for doing some statistical analysis of process capability.
    1 point
  28. I've posted this before but I can't seem to find where now. This is a VI that will read the Windows registry and return the development and run-time versions installed, along with the "Current" version, which is usually the last development version opened. LabVIEW Versions Installed.vi
    1 point
  29. LLBs and LVLIBs solve different problems (and create different problems), and are not interchangeable or really related beyond sharing the word "library" in their acronyms. Here are some characteristics and comparisons of the two:
    - LLB provides physical packaging containment of members, and does not address namespacing (nor scoping).
    - LVLIB provides namespace containment of members (and also scoping), and does not address physical packaging.
    - Both LLB and LVLIB impose static linkages that can be incidental and undesirable. These negatively affect load times (IDE and run-time), build times, and compile times. Anecdotally, it's greater than O(n) time complexity, especially when circular linkages exist between multiple such hierarchies, and most especially if the library hierarchy is nested (e.g., LVCLASS within an LVLIB, or nested LVLIBs).
    - An LVLIB can be built into an LVLIBP. An LVLIBP is different from an LLB in that an LLB packs writeable, cross-platform VIs capable of mutating to future LabVIEW versions, while an LVLIBP is a read-only, platform- and version-specific byte-code distributable (which may contain the block diagram for debugging, while still remaining platform- and version-specific).
    - An LLB may be used to pack libraries/plugins for deployment as application plugins, or as reusable libraries in development. An LVLIBP is effectively only used for the former.
    - Neither LVLIBP nor LLB can pack non-LabVIEW-source filetypes as resources. Be mindful to account for both renaming/name-mangling of resources and changes in relative path.
    - LVLIBs (and LVLIBPs) render nicely in the LVProj tree, while LLB members appear indistinguishable from POVIs (plain ol' VIs).
    - LLBs cannot pack two VIs of the same filename. This prevents packaging multiple LVCLASS hierarchies that use dynamic dispatch methods. This represents a few LabVIEW design limitations: 1) LLBs lack an internal directory hierarchy for organization and for packing two identical filenames, and 2) LVCLASS uses the OS filename as the only unique identifier for method identification in a class (filename represents a good default value, but we need one more degree of indirection as a field within the LVCLASS XML; it's another discussion why it is so highly desirable to decouple source from OS convention).
    - For actively-developed libraries, LLBs are bad because they exist as a monolithic binary file. LVLIBs are bad because there exist no diffing or merging capabilities (this also applies to LVPROJ, LVCLASS, XCTL, and XNODE filetypes. This is especially insidious, because popular DVCS clients autodetect the file format as XML and think "Aw yeah dude, I got this!" MERGE FAIL. Corrupted source. Be sure to turn off this autodetection for these filetypes.)
    - LVLIBs can apply icon overlays to members.
    - LVLIBs may be carefully designed to include strategic static linkages, including non-LabVIEW source files. This is one strategy to avoid managing the "Always Include" section of AppBuilder for distributables, especially as a convenience for end-user-developers of reuse libraries. But this fails by default because of the setting "Remove unused members of project libraries". Unchecking that often causes failure to build for non-trivial-sized applications linking to gargantuan LVLIBs shipping in vi.lib and as add-ons. So the strategy may or may not work (it's coupled to whether or not you're keenly aware of, and properly managing, all application static dependencies).
    The reason I want to like LLBs is their ability to provide packaging constructs that yield higher performance on actual hardware. It's faster to load 1 file of size 100 units than 100 files of size 1 unit. It's also a more convenient distribution format -- a single file. (Also, I can't think of another language that effectively enforces a 1:1 relationship between method and physical file. LabVIEW requires substantially more clerical work to develop and refactor for this reason.)
    The reason I want to like LVLIBs is to enable namespacing and scoping beyond the LVCLASS level. Though, this namespacing always comes with the cost of static linking, which is perhaps the #1 problem for codebases of non-trivial size (do you see busy cursors while editing and wiring? long build times? load times? type prop errors? corruptions from application refactoring? heartache and heartburn generally?)
    Also, LVLIBP is neat in practice, but so narrowly scoped to specific deployment scenarios where it's acceptable to target version- and platform-specific targets (version-specificity is definitely the bigger problem; every 12 months we are afforded the opportunity to choose between obsolescence/migration/revalidation or just plain outdatedness). And without arbitrary namespace composition (namespace B and namespace C may both declare using namespace A, with namespace A unaware and none the wiser), it's not necessarily a compelling feature to begin with. (Corollary: an LVCLASS's ability to namespace and scope its members is desirable and good; but it becomes less necessary and more likely incorrect to continue namespacing and scoping at higher abstraction levels without namespace composition.)
    Do LVLIBs scale? Using LVLIBs in source on an actively-developed project raises barriers to both team scale and application scale. The cost of not using them is loss of scoping, which is avoided through communication and convention, and any actual problem is easily detected if it were to exist. Another cost is loss of namespace, easily avoided through file-naming conventions (which is incidentally an industry standard on the web: prefixing library APIs with library-specific prefixes to avoid collision). Said another way, ROI diminishes and reverses to negative at scale, and the opportunity cost has simple workarounds. I choose the opportunity cost.
    But... LVLIBPs! Another apparent opportunity cost of avoiding LVLIB in source is the inability to have LVLIBP as a distributable. Though, if you treat build/distribution as a second toolchain separate from the dev toolchain, the dev source can remain unencumbered by LVLIB, which is only added as part of the build process. I have mixed feelings on ROI here, but if LVLIBP makes sense for you, consider this strategy to make your dev experience noticeably more pleasant.
    Here's a real-world case study. This is from a Wirebird Labs client who gave permission to release this screenshot of a bird's-eye view of their application analyzed using Links. What we're looking at in the screenshot below is an application with over 8000 application VIs (not including third-party dependencies). Libraries are identified by labels. Nodes represent a source file (mostly VIs, but also LVLIB and LVCLASS and CTL), and connections between nodes represent static links as detected by the LabVIEW linker. This is a static screenshot of the application, but while running, the physics engine lays out nodes as a force diagram. The strength of the force is based on the number of static links existing between nodes, and a negative force is applied to nodes with no static links. This causes nodes to form clusters in space where strong coupling exists.
    What is the value of analyzing the application like this? Here is a list of issues we needed to solve:
    - It took a long time to build. This made iterating costly, both in time and morale. Oftentimes, the build failed (anecdotally, a fresh warm boot of LabVIEW helped).
    - The IDE was painfully slow during development; the cursor was continually "waiting" during wiring operations.
    The way we solved both problems was simply by taking a pair of "scissors" and snipping links between nodes. The types of links that we snipped were the incidental links introduced by packaging and namespacing facilities in LabVIEW:
    - removing LVLIBs altogether
    - removing VIs from LLBs
    - calling concrete instances of polymorphic VIs rather than the parent
    - removing public type definitions and utility VIs from LVCLASSes
    Within a couple of days, we went from "kick off a build and go grab lunch" to "kick it off and get a coffee". The application and application framework had not changed to see these improvements; just the logical and physical packaging of dependencies. (In addition to solving the main performance pain points, additional areas for architectural consideration are easily visualized; that's beyond the scope of this conversation.)
    Without LVLIBs, how do I avoid name collisions? I prefer this file-naming convention: Project-Class-Method.vi or Application-Class-Resource-Action.vi ... or generally, LeastSpecificNamespace-...-SpecificThing-...-VerbActingOnASpecificThing.vi. For instance, Deploy-UpdateService-CheckForUpdates.vi or FTW-JSON-Deserialize.xnode. The name of the owning class just drops the -Method postfix. Is it ideal? It's neither terrible nor great. Some benefits are that filenames sort nicely, and it's easy to spot anomalous linkages. Semantic naming makes it easier for development tools outside the IDE (SCC client and provider, build toolchains). One downside is that your hand is forced on naming Dynamic Dispatch methods in classes (again, I would like to see this coupling separated by a degree of indirection in future LabVIEW versions).
    Conclusion? This area of LabVIEW does not have a general solution or general best practice. Be aware of the tradeoffs of different strategies, and ensure they map successfully to your application space, stakeholders' needs, and team's sanity.
    Standing offer: Send me a message if you feel some of the scaling pain points:
    - busy cursor while wiring
    - build times lasting longer than 10 min
    - mass compile times lasting longer than 10 min
    - LVProj takes longer than 1 min to load
    and within 2 hours of screen-sharing I reckon we could substantially improve your LV dev experience. I'm interested to further build tribal knowledge and provide feedback to NI on taking LabVIEW applications and teams to scale.
    1 point
  30. Its sister function "Get LV Class Path" is similarly glacial for no obvious reason, as is "GetLVClassInfo" from the VariantType library. I've wondered if the problem is just that they call functions running in the UI thread for some reason, but it could also be root loop. The only workaround I see is caching: store a set of default-value objects in a lookup table and check against this before calling "Get LV Class Default Value". I wish NI would put some effort into improving semi-crippled functions like these.
    1 point
  31. I think the thing that confuses people the most about these two primitives is the run-time behavior. You stated the proper behavior in your blog comment, but the distinction is at best very subtle the way you put it. Furthermore, the help really distinguishes them by use cases, not by their underlying behavior, which I think would be more helpful. To More Specific Class tests against the compile-time wire type, whereas Preserve Run-Time Class tests against the actual run-time type of the object on the wire, which might not be the same as the wire type. One is essentially a static cast where you're always testing against a known type, and the other is a dynamic cast which evaluates the type at run time. However, I hesitate to use the static/dynamic language because it carries with it some legacy baggage from C++.
    1 point
  32. My $0.02... FGs / AEs / LCODs have proven themselves as excellent constructs for many years (the config file library comes to mind). Yes, they can be abused (I can abuse anything). Yes, they can get out of hand with quick fixes (the quick fix is to blame, not the AE construct). Just because we have another tool (e.g. LVOOP) in our bag doesn't mean we need to throw out the old ones... maybe we just need to protect them better, as suggested above. BTW... aren't you the one who said "...[don't] throw the baby out with the bath water" awhile back? ~Dan
    1 point
  33. With LabVIEW 7.0 this is basically no problem. The functions to deal with .ico files have been available in LabVIEW since about 6.0; check out vi.lib/platform/icon.llb. Those are the same functions used by the application builder to read .ico files as well as to replace icon resources in the built executable. In LabVIEW 7.0 you also have a VI Server method to retrieve the icon of a VI. Together, these two things are all that is needed. There are, however, a few fundamental problems. The function to replace icon resource data works directly on the executable image (well, really on the lvappl.lib file, which is an executable stub that is prepended to the runtime VI library, locates the correct runtime system, and hands the top-level VI in that library to the runtime system). As such it can only replace already existing icon resources, as doing otherwise would require relocating the resource table and its pointers, an operation which is very involved and error-prone. Windows itself doesn't have documented API functions to store resources into an executable image, as this is functionality not considered necessary for normal applications. lvappl.lib contains only 16-color and 2-color icons in the sizes 16*16 and 32*32. Wanting to have other icons would mean first adding those resolutions and sizes to lvappl.lib and improving the icon functions in icon.llb to properly deal with those extra resolutions. This is not really difficult to do. A different problem is that LabVIEW icons are always 32*32 pixels, whereas Windows really needs 16*16 pixel icons too, for displaying in the top-left corner of each application window as well as in detail view. Rolf Kalbermatter
    1 point
  34. Update on the pricing changes. A Vision Development Module deployment license used to cost $440 ($110 with academic discount). The new price appears to be $582. Now, since I am in academia, the price that I see is different. This was a bit of a shock to me (remember, it used to cost 4 times less), so I tried to look for alternative sources of information. The page for quotes has a phone number (which I called, more on that below) and an online quote link, which I followed. Being logged in as a duly registered academic user, the resulting quote specified that an academic discount was applied. Yet the price appears as $582. Which one is correct? I called. Long story short, the guy on the other end of the line did not know, then said it was $582, but then, when I mentioned that I had seen $407.40 on one of the pages, realized it was $407.40, and when I mentioned that the quote I had requested online showed $582, told me that then it would be $582. When I tried to explain that this was at the very least confusing and additionally looked like a rip-off, he simply hung up. I am ambitiously engineering my replacement for the small part of the VDM I was using before I complete my migration to Python....
    0 points
  35. Just so everyone is aware of the conclusion of this, and thank you everyone for your help here. After lots of discussion with our NI rep and R&D, it was determined that R&D purposefully did NOT implement any VISA capabilities for the NI PXIe-4080 DMM, not even the ability to enumerate the device. They recommended these two things, neither of which is a good option for our architecture or requirements:
    - Use NI's proprietary System Configuration API to dynamically find the PXIe-4080 DMM. I don't want to move my entire framework to this proprietary approach (nor do I believe it would cover all the bases VISA Find does). That's what a standard like VISA is for, which any PXI device should support (at least VISA enumeration/find).
    - Create an INI/INF file using the VISA Wizard (https://www.ni.com/docs/en-US/bundle/ni-visa/page/ni-visa/usingddwtoprogrampxipcidevice.html). However, I don't have access to a Certificate Authority (CA) to make that installable on Win10, nor can I even install the Windows Driver Kit (WDK) on my machine due to IT security restrictions without particularly difficult approval. NI R&D refused to do the (relatively small) work to create this set of files themselves to fix the oversight.
    So at the end of the day, this PXIe device is not VISA-capable at all, and they designed it that way. Our project is moving to swap what PXIe-4080 cards we already have for PXI-4070s (which do support VISA enumeration/find/etc.), and future PXI DMM purchases for our setups will likely be Keysight M918xA's, assuming they play nicely with NI-VISA in an NI PXI chassis. I wanted to let folks know that this model isn't fully compliant with the PXI standard (although they tried to claim that they meet the letter of the requirements in a particularly lawyerly way, but certainly not the way any NI customer would read it), and I'm a bit concerned this may be the case with future cards; be aware that NI PXI devices might not support VISA anymore.
    0 points