Everything posted by Rolf Kalbermatter

  1. This kind of idea comes up frequently, and various people have tried to solve it. In the end almost everybody ends up writing a simulator for their specific device and then moves on, since there are other projects to be done that deliver income. Writing a generic simulator sounds like a good idea, until you start to work on one. First problem: lots of instruments have very different ideas of how to behave on invalid input, wrong sequences and conflicting settings. Some simply go into a lock mode that you have to take them out of, sometimes with a literal kick in the ass by pushing the power button. Others will refuse wrong settings, and some ignore them and act as if nothing had been sent. A few try to be smart and will change anything needed to make the new settings work, even though you told them in the previous command to go into this or that mode. The next problem is that writing the specific command instruction file for your generic simulator is often almost as troublesome as implementing the simulator from scratch. Either the simulator only supports the most trivial command/response pattern (see the sketch below), in which case the command instruction file is trivial but the behavior of your simulation is nowhere near your actual device, or it supports many modes and features and your command instruction file gets complicated too. And with almost every new device you want to simulate, you can be pretty sure you will need to go back into your simulator to add a feature to support some quirk of the new device. The end result is that there are many people with good intentions to write such a beast, and none who got beyond a simple device-specific simulator. And even that one seldom simulates the real thing closely enough to test more than that your program works when everything goes right, which in reality it often doesn't.
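     What such a trivial command/response pattern boils down to is little more than a lookup table. Here is a minimal sketch in C with a hypothetical SCPI-style command set; note that everything that makes real instruments hard to simulate (modes, lockups, conflicting settings) is exactly what this pattern cannot express:

        #include <stdio.h>
        #include <string.h>

        typedef struct { const char *cmd; const char *reply; } CmdEntry;

        /* Hypothetical command set; a "command instruction file" would fill
           this table at runtime instead. */
        static const CmdEntry table[] = {
            { "*IDN?",      "ACME,FUNCGEN-1,0,1.00" },
            { "MEAS:VOLT?", "+1.234560E+00" },
            { "SYST:ERR?",  "0,\"No error\"" },
        };

        /* Returns the canned reply, or NULL for an unknown command; a real
           device might instead lock up, reject it, or silently ignore it. */
        static const char *simulate(const char *cmd)
        {
            for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
                if (strcmp(table[i].cmd, cmd) == 0)
                    return table[i].reply;
            return NULL;
        }

        int main(void)
        {
            printf("%s\n", simulate("*IDN?"));
            return 0;
        }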
  2. That custom control editor is one of the more arcane parts of LabVIEW. It has existed since at least LabVIEW 2 and has seen only small improvements since. And according to its original creator back at that time, it is a pretty nasty piece of software code that would need a complete rewrite to allow more elaborate modifications. Chances for that to happen are about 0.0something % nowadays, unless someone somewhere is willing to pour a few million into this. 😎
  3. My snarky response would be LabVIEW NXG. 😀 More realistically, you could consider LabVIEW Real-Time and LabVIEW FPGA as parts of a bigger suite. BridgeVIEW morphed into the LabVIEW DSC (Datalogging and Supervisory Control) Module long ago (LabVIEW 7 or so), which is an add-on, similar to the Vision modules and such. But it is legacy: 32-bit only, not really improved for a long time, and it still uses SQL Server Express 2012, which is not compatible with the latest Windows versions.
  4. I'm not convinced. They either have no idea how complicated LabVIEW is, or just hoped they could throw some sand in the NI managers' eyes by teasing them with what they thought was still NI's pet child.
  5. That is partly NI's work. They were pretty aggressive about defending their idea by applying for quite a few patents and defending them too. Of course, if you go to the trouble of applying for a patent you have to be willing to defend it, otherwise you eventually lose the right to the patent anyway. And they did buy up some companies that had something similar to LabVIEW, such as DasyLab, although in my opinion DasyLab didn't quite go beyond the standard "wire some icons together", similar to what HPVee did and what Node-RED is doing too. But they tried to use some structures that were darn close to LabVIEW loops, and that was a prominent NI patent. So NI eventually approached them and offered to either buy them or meet them in front of a judge. The rest is history.
  6. Well, I do have a VI library that can talk to VI Server, from around LabVIEW 6 or 7. The principle still seems pretty much the same, but a few zillion new attributes and methods have been added to VI Server since then, plus classes and whatnot.
  7. Actually, as far as the UI goes it depends quite a bit on what you want to do. For fairly simple UIs that "just work" it's still an amazingly easy system. If you want to support the latest craze in UX design, then yes, it is simply stuck in how things were 20 years ago. Basically, if you need a functional UI it is fairly easy and simple; if you need a shiny UI then LabVIEW is going to be a pain in the ass to use. Nowadays Beckhoff and Siemens and others have their own UI solutions too, but in comparison to them LabVIEW is still shiny and easy. Beckhoff does have the advantage that their UI is HTML5 based and therefore easy to remote, but it looks like a stick figure compared to a Van Gogh painting when you put it next to a LabVIEW front panel. My dream was that they would develop something that allows the LabVIEW front panel to be remoted as HTML5, but seeing the Beckhoff solution I am starting to think that this project failed because they did not want to settle for simple vector-graphic front panels but wanted a more native-looking impression in the web browser. And yes, documenting the VI Server binary TCP/IP protocol and/or adding a REST or similar protocol interface to VI Server would be an interesting improvement.
  8. No, Watcom C was used to compile LabVIEW and LabWindows/CVI for Windows 3.1 because it was pretty much the only available compiler that could create flat 32-bit code at that point, and absolutely the only one that supported this on the 16-bit Windows platform. They did not have a license to use the compiler in there, and it was several years before Watcom faltered and a few more years before its sanitized source code was open sourced. It still exists as open source, but activity on that project is pretty much dead.
  9. LabWindows/CVI did use a lot of LabVIEW technology originally. Pretty much the entire UI manager and other manager layers such as File I/O etc. were taken from the initial LabVIEW 2.5 version developed for Windows. On top of that they developed the LabWindows/CVI specific things such as function panels, project management, text editor and so on. The C compiler was their own invention, I believe. Later, around 2010 or so, they replaced the proprietary compiler backend in LabVIEW with LLVM, and after that they used that knowledge to replace the LabWindows/CVI compiler with the same LLVM backend. Callback support in LabVIEW, while not impossible to do (ctypes in Python can do it too), was always considered an esoteric thing. And IMHO not entirely incorrectly. The problem isn't so much the callback mechanism itself; libffi is a reasonable example of how it can be done, although I'm sure if you dig in there it will take you a few days to understand what is needed for a Windows application alone. The much more difficult thing is to offer a configuration in the Call Library Node that doesn't require a PhD in C programming to correctly match the callback function and to map the callback function parameters to the callback VI inputs and outputs. ctypes in Python is more straightforward because Python is a procedural language like C, and even there only very few people can handle normal function calls, and virtually nobody gets the callback functions right!
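     To make concrete what would have to be configured, here is a hypothetical sketch of the kind of driver API this is about; nothing in it is a real driver, but every detail of the DataCallback type (parameter types, calling convention, buffer lifetime) is what a Call Library Node would somehow have to express and then map onto a VI's connector pane:

        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical callback type a driver might expect. */
        typedef void (*DataCallback)(void *userData, const double *samples, int32_t count);

        static DataCallback registered = NULL;
        static void *registeredContext = NULL;

        /* Hypothetical registration function as a driver DLL might export it. */
        int32_t registerCallback(DataCallback cb, void *userData)
        {
            registered = cb;
            registeredContext = userData;
            return 0;
        }

        /* Stub standing in for the driver firing an acquisition event. */
        static void driverFires(void)
        {
            double data[3] = { 1.0, 2.0, 3.0 };
            if (registered)
                registered(registeredContext, data, 3);
        }

        static void myCallback(void *userData, const double *samples, int32_t count)
        {
            (void)userData;
            for (int32_t i = 0; i < count; i++)
                printf("sample %d: %f\n", (int)i, samples[i]);
        }

        int main(void)
        {
            registerCallback(myCallback, NULL);
            driverFires();
            return 0;
        }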
  10. Sorry, but you are talking nonsense here. I don't mean the part about Jeff spinning off a new company with that money, although I think that's not very likely, but a Python-based LabVIEW is just utter nonsense. LabVIEW is written maybe 95% in C/C++ with a little Objective-C and C# thrown in, and reimplementing it in another language would take a mere 10 to 20 years if you are lucky. If you want to make it reasonably compatible you can add another 10 years at least. And that is not even considering the abominable performance such a beast would have if you chose Python as the basis. Reimplementing LabVIEW from the ground up would likely cost 500 million to 1 billion dollars nowadays. Even NXG was not a full redesign. They tried to replace the UI layer, project management and OS interface with C# and .Net based frameworks, but a lot of the underlying core was simply reused and invoked through an interop layer. And a pure LabVIEW-based company would be difficult. The current subscription debacle was an attempt to make the software department inside NI self-sustaining by implementing a serious price hike. At the same time they stopped LabWindows/CVI completely, which always had been a stepchild in the NI offerings, together with most other software platforms like Measurement Studio. They earned very little with them and did not invest much in them because of that, which of course made them a self-fulfilling failure. There is only a relatively small development team left for LabVIEW, apparently mostly located in Penang, Malaysia, if I interpret the job offers on the NI site correctly, and they are probably scared to death of the huge and involved legacy code base that LabVIEW is, and don't dare to touch anything more substantial out of well-founded fear of breaking a lot of things with more involved changes. I think what LabVIEW needs is a two-version approach. One development branch releases regular new builds, in which developers are allowed to make changes that can and very likely will break things, and to work on them and improve them gradually. Once a year, or even once every two years, everything that has proven to be working and stable is taken over into an LTS branch that professionals, who pay a premium for this version, can use with 99% assurance that nothing important breaks. You want to play with cutting-edge features and don't mind receiving regular access violation dialogs and other such "features"? You can use our development release, which costs a modest license fee, and if you promise not to use it for anything that earns you money you can even use the community license with it. You expect rock-solid performance and no crashes or bugs that are not caused by other things such as third-party additions or your hardware/OS? Here we have an LTS version! It does not contain the latest gadgets, features and toys, but we can assure you that everything in it is absolutely rock solid. And yes, this has its price, but if you use it you can be sure to focus on solving your problems rather than worrying about quirks and irks in our software!
  11. Well, I can assure you that he was not the person who would blindly follow shareholder value and forget everything else. I have talked with him personally on several occasions and he was as approachable as you can get. When you talked with him it was not as a number in the employee list or as an expendable necessity to run his company, but as a human, and he was honestly interested in talking with you, not just as a social nicety. He was the old-style boss who cared more about his company and the people who worked there than about sales and profit, which were merely a means to make his company and the people working there succeed and prosper, not an end in themselves. Could he have decided to stop LabVIEW? Probably, if he had seen it become a sinkhole for his company. Would he have stopped it because it cost a bit more than the company could earn from direct license sales? Certainly not, since he clearly saw the internal synergies. So yes, I'm sure he isn't happy about what the current management has done. He likely stood behind the idea to diversify the old NI away from a mostly DAQ-centered company that had pretty much reached the ceiling of that market and couldn't grow much more. But I doubt very much that he approved of throwing away the very roots of his company in favor of new and bolder frontiers.
  12. Are you seriously asking me if my solutions have weak points? 😁 I'm outraged! 😜 Seriously though, the LabVIEW manager function variant and the Windows API variant have a very, and I mean really very, very small chance that the function will be broken, removed, changed or otherwise made unusable in a future version of LabVIEW or Windows. The chance for that is however of the same order of magnitude as the chance that you will experience the demise of the universe. 😁 The chance that LabVIEW will eventually cease to exist is definitely greater, and in that case you won't have to bother about this anyway.
  13. The problem is your configuration of the last parameter of the H5Aread function. This function returns the "value". For a string this seems to be the pointer to the internally allocated string, as can be seen from your C code:

        const char *my_string;
        H5Aread(attr_id, type_id, &my_string);

     What you have programmed in LabVIEW corresponds to this:

        /* const */ char my_string[100];
        H5Aread(attr_id, type_id, my_string);

     That's absolutely not the same! You want to configure this last parameter as a pointer-sized integer, passed as pointer to value, and then use <LabVIEW>\vi.lib\Utility\importsl\GetValueByPointer\GetValueByPointer.xnode. But! This function uses a special shared library as helper that is located at <LabVIEW>\resource\lvimptls.dll, and the LabVIEW application builder keeps forgetting to add this shared library to an application build, assuming that it is part of the LabVIEW runtime engine; but the runtime engine somewhere along the way lost this DLL. See this thread for a discussion of getting GetValueByPointer to work in a built application, or alternatively how to replace it with another function that does not need this DLL. It also depends on whether the returned value is variable sized or fixed sized; there should be a function that can tell you which is the case. For variable-sized values, the library allocates the string buffer and you then have to deallocate it with H5free_memory(). For fixed-size values you have to preallocate the buffer to size + 1 bytes before calling the read function, make sure to fill in the last byte with 0, and afterwards deallocate it with whatever function corresponds to your allocation. If you think this is complicated, you are right; if you blame LabVIEW for not doing all this for you, you blame the wrong one. LabVIEW can't do this for you; it is dictated by the programmers of the HDF5 library and there is no way in the whole universe that LabVIEW could know about this.
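     For reference, here is a hedged C sketch of the two cases just described; it assumes H5Tis_variable_str() is the HDF5 call that tells them apart, and error handling is omitted for brevity. This is the logic your LabVIEW diagram has to mimic:

        #include <stdlib.h>
        #include <string.h>
        #include <hdf5.h>

        /* Returns a malloc()ed copy of a string attribute's value; the caller
           frees it with free(). */
        char *readStringAttr(hid_t attr_id)
        {
            hid_t fileType = H5Aget_type(attr_id);
            hid_t memType = H5Tget_native_type(fileType, H5T_DIR_ASCEND);
            char *result;

            if (H5Tis_variable_str(fileType) > 0)
            {
                /* Variable size: the library allocates the buffer; we pass a
                   pointer TO our pointer and release the library's buffer
                   with H5free_memory() afterwards. */
                char *tmp = NULL;
                H5Aread(attr_id, memType, &tmp);
                size_t n = strlen(tmp) + 1;
                result = malloc(n);
                memcpy(result, tmp, n);
                H5free_memory(tmp);
            }
            else
            {
                /* Fixed size: we preallocate size + 1 bytes and zero the last
                   byte, since the stored value may not be null terminated. */
                size_t len = H5Tget_size(fileType);
                result = malloc(len + 1);
                H5Aread(attr_id, memType, result);
                result[len] = 0;
            }
            H5Tclose(memType);
            H5Tclose(fileType);
            return result;
        }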
  14. Well, in a world where many consider bad publicity many times better than no publicity, you could be very right. For several days now, the market has valued NI higher than what Emerson offered. So their whole attempt at appealing to the shareholders to let them have their way does seem to have backfired.
  15. It's unclear, since you still don't want to post VIs but only images, so we have to guess. But I would say the array simply has no label. The node inside the event structure only lets you select named elements. The "CallBack" word looks like it may be a label, but it is probably just a free label like the "Test" in the image below.
  16. There is no real problem with creating such an array directly in the callback function and sending it through the user event. But you must understand that the actual array must be a LabVIEW managed array handle. Basically something like this will be needed (you will need to adjust the typedef of the cluster to match what you used in your LabVIEW code, since you "forgot" to attach your code):

        #include <stddef.h>
        #include "extcode.h"

        #include "lv_prolog.h"
        typedef struct {
            int32_t status;
            double value;
        } DataRecord;

        typedef struct {
            int32_t size;
            DataRecord elm[1];
        } DataArrayRec, *DataArrayPtr, **DataArrayHdl;
        #include "lv_epilog.h"

        /* If the callback can be reentrant, meaning it can be called while
           another callback is already executing, use of a global variable for
           the data array is NOT safe and needs to be handled differently!!!! */
        static DataArrayHdl handle = NULL;

        void yourCallback(.......)
        {
            int32_t i, size = somewhereFromTheParameters;
            MgErr err = mgNoErr;

            if (!handle)
            {
                handle = (DataArrayHdl)DSNewHandle(offsetof(DataArrayRec, elm) + size * sizeof(DataRecord));
                if (!handle)
                    err = mFullErr;
            }
            else
            {
                err = DSSetHandleSize((UHandle)handle, offsetof(DataArrayRec, elm) + size * sizeof(DataRecord));
            }
            if (!err)
            {
                (*handle)->size = size;
                for (i = 0; i < size; i++)
                {
                    (*handle)->elm[i].status = status[i];
                    (*handle)->elm[i].value = values[i];
                }
                err = PostLVUserEvent(lvEventRefnum, &handle);
            }
        }

     The easiest way to actually get the datatype declaration for the array that you need to pass to PostLVUserEvent() would be to create a dummy Call Library Node with a parameter configured as Adapt to Type, then right-click the node and select "Create C Source code". After choosing where to write the file, it will create a C source file that contains the necessary data type declarations.
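     If the callback can indeed be reentrant, one option is to allocate a fresh handle per invocation instead of reusing a global one. PostLVUserEvent() copies the data into the event queue, so the handle can be disposed of right after posting. A minimal sketch, reusing the DataArrayHdl type and lvEventRefnum from above and assuming a hypothetical callback signature:

        /* Hypothetical signature; adjust (size, status, values) to whatever
           your driver actually delivers to the callback. */
        void yourReentrantCallback(int32_t size, const int32_t *status, const double *values)
        {
            int32_t i;
            DataArrayHdl local = (DataArrayHdl)DSNewHandle(offsetof(DataArrayRec, elm) + size * sizeof(DataRecord));
            if (local)
            {
                (*local)->size = size;
                for (i = 0; i < size; i++)
                {
                    (*local)->elm[i].status = status[i];
                    (*local)->elm[i].value = values[i];
                }
                /* PostLVUserEvent() copies the handle's data, so disposing the
                   handle immediately afterwards is safe. */
                PostLVUserEvent(lvEventRefnum, &local);
                DSDisposeHandle((UHandle)local);
            }
        }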
  17. Or that there will be someone else who will offer more. There are several companies that could benefit from an NI integration at least as much, and some of them could benefit NI a lot more than Emerson would.
  18. They did manage to spell LabVIEW correctly in their letter to the NI board of directors, but that definitely doesn't mean they are after LabVIEW. Emerson is a company that has declared shareholder value the holy grail of its philosophy. To make that holy grail come true they need to grow, and grow fast, and that cannot be done through internal growth alone. So they need to acquire others. They changed their strategy recently and divested themselves of several large divisions that they considered unable to add the necessary growth to make their bold targets possible. They even gave up their headquarters for that. They looked around for potential candidates and NI was a very attractive target: considerable internal value and growth potential, but underperforming on the stock market for several years. So, a relatively cheap buy with a lot of potential. The perfect takeover target to realize the external growth needed to satisfy the shareholder value promise. And being so keen on shareholder value, they figured their best bet was to appeal to the NI shareholders to make them force the board of directors to sell. Except that going public with this now puts them in the seat of the hostile raider. And that is what they really are. They don't want to integrate all of NI's services and products into their own corporate structure. If they get their way, they will pick the cherries from the pie and throw the rest in the trash. And no, LabVIEW is not the cherry they are after. In their view that's more an old and withered rose that needs to be chopped off than anything else. It's also not the DAQ boards. What they are after is the test system division, a relatively young part of NI but with a lot of growth potential for quite some time to come. TestStand stands a good chance of being reused; LabVIEW is in that picture at best another test adapter provider inside TestStand, besides Python and .Net. Their whole behavior sounds like the little child that sits in a corner and starts sulking because the world doesn't want to give it what it feels is its natural right to have. It would seem to me that if their target doesn't feel like selling the company for the price offered, they have exactly two options: make a better offer that can't be refused, or walk away. But instead they choose to stamp their feet on the ground and be very upset that their "generous" offer wasn't welcomed.
  19. Basically the same as Jacobson said. The DMA FIFOs are internally 64-bit aligned. If you try to push data through that doesn't fit into 64 bits (8 * 8 bit, 4 * 16 bit, 2 * 32 bit or 1 * 64 bit), the FPGA will force alignment by stuffing extra filler bytes into the DMA channel. In that case you lose some throughput, as extra data is transferred that is simply discarded on the other side. That loss is however typically very small. The worst case would be pushing 5-byte data elements (clusters of 5 bytes, for instance) through the channel: then you would waste 3/8 of the DMA bandwidth. The performance on the FPGA side should not change at all purely from different data sizes. What could change somewhat is the usage of FPGA resources, as binary bit data is stuffed, shifted, packed/unpacked and otherwise manipulated to push into or pull from the DMA interface logic. The performance on the real-time side could change however, as more complex packing/unpacking incurs some extra CPU consumption.
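     The padding arithmetic is easy to verify; here is a small sketch (my own illustration of the stuffing behavior described above, rounding each element size up to the next 8-byte boundary):

        #include <stdio.h>

        int main(void)
        {
            for (int elem = 1; elem <= 8; elem++)
            {
                int padded = (elem + 7) & ~7;   /* round up to a 64-bit boundary */
                printf("%d-byte element -> %d bytes transferred, %.1f%% of bandwidth wasted\n",
                       elem, padded, 100.0 * (padded - elem) / padded);
            }
            return 0;   /* a 5-byte element transfers 8 bytes: 3/8 = 37.5% wasted */
        }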
  20. I had the same reaction. Not the System Engineering Group that tried to make DCAF and similar. They have a different division, one that you and I haven't seen much of yet, that makes complete test systems for EV, semiconductor and other high-value industries. I have no idea if they use LabVIEW in them. I'm sure they use TestStand and probably some LabVIEW adapters to interface to hardware components, but it's definitely NOT a LabVIEW solution. That's very ambiguous, depending on whom you directed this message to. 😁
  21. Possibly, but I think it is not even CompactRIO or any of the DAQ boards. Maybe PXI, but most likely neither. What they want is the test system division, which makes complete test systems for the EV, energy and space markets. These are high-value systems with an interesting cash flow and a significant growth potential. They want that type of business to make their shareholder expectations come true. Selling DAQ boards or even LabVIEW license subscriptions doesn't earn enough for that. Most likely the NI board is already in communication with companies like Danaher, Roper, Agilent, Keysight and others about their potential interest in saving NI from the Emerson hostile takeover bid. I'm not sure either would be better for us users, but it definitely has the potential to get a significantly higher share price for the shareholders, and that is, and has to be, the main concern of managers in a publicly traded company, as otherwise they might face accusations of mismanagement and legal action by the shareholders.
  22. https://www.emerson.com/en-us/automation/automation-and-control
  23. I'm fully aware of that. It just means that this could be an additional hurdle for them to take, not that it is the only one. In terms of LabVIEW, what I think would happen if they are successful is the same as what happened to HiQ, Lookout, Electronics Workbench/Ultiboard and a few others when NI took them over. They hailed the purchase as a great addition to their product portfolio, took out some of the IP to integrate into their own, and then let it die. Trying to interface Emerson products with LabVIEW in the past simply got you a blank stare from the Emerson people and the question: "Why would you want to do that when we have such nice software ourselves?"
  24. I assume you were looking for the ironic icon and couldn't find it? 🤑
  25. Yep. But a very significant amount of shares (if not an outright controlling amount) was, at least until a few years ago, held by the three founders, partly in the form of trusts through family members. So it will probably come down to the question of whether any of the heirs feels inclined to sell their shares to Emerson for some quick cash or not.