Rolf Kalbermatter

Members
  • Posts

    3,871
  • Joined

  • Last visited

  • Days Won

    262

Everything posted by Rolf Kalbermatter

  1. In newer versions of LabVIEW there should be a property called something like "Auto Adjust Scales". It is a Boolean that can be set on or off. You can also set it in the graph's right-click pop-up menu under Advanced. EDIT: Seems I can't find that property; I probably dreamed it up. But the pop-up menu option definitely works. Rolf Kalbermatter
  2. Even pre-8.2 there is seldom if ever a need for a CIN. Unless you work in a LabVIEW version prior to about 5.1, you can pass native data to a DLL too. LabVIEW having to convert its native data to C types is the main reason a DLL call can get slower. The only CIN feature that was not really available for DLLs before 8.2 concerned asynchronous driver implementations (the CINAbort() function), and that has nothing to do with performance but with creating drivers that can be aborted when the LabVIEW context goes idle somehow. However, that is not a magic bullet; it needs to be designed into the CIN or DLL very deliberately from the start. In terms of performance there is basically no difference between calling a CIN or a DLL, provided they both implement the same functionality in the same way. That said, for many algorithms you cannot gain much by moving them into C, since LabVIEW itself is a compiling language. LabVIEW does not optimize to the same degree as highly optimized C code can, so for some routines there is a gain to be had, but if you start to look into this you should first have a look at the implementation of your LabVIEW code: there is a good chance that you simply implemented the easiest algorithm in LabVIEW instead of the most optimal one. The main reasons to use C code are in fact either that you already have the C code, or that the C API you want to interface to is too complicated to be interfaced with LabVIEW directly. Choosing LabVIEW native data types to pass to the C code can make those calls even faster, if you know how to deal properly with those data types in the C code; a bad implementation can easily cause memory leaks and/or be even slower than having LabVIEW pass a C pointer to the function instead. In any of these cases except asynchronous driver abortion, there is no reason why a DLL wouldn't work either, unless you work in the pre-5.1 stone age. Rolf Kalbermatter
  3. The problem I see here is that LabVIEW is often compared with little more than the Express Edition of Visual Studio. I do not feel this is a legitimate comparison for the way we use LabVIEW. With Visual Studio you would usually end up adding other costs such as UI widget libraries, extra development libraries, etc. Sure, you can get some for free, but here too you usually get what you pay for. The other route is to go completely GNU with GCC, SourceForge and other OSS libraries. This is absolutely a workable path, but setting up an environment that way so it "just works"™ is quite a bit more work than doing the same with a professional software IDE like Visual Studio Professional Edition, which last time I checked cost quite a bit more than $250 (and that was without any MSDN subscription). Developing something in Java certainly has its merits (such as even better platform independence than LabVIEW), but I remember the OP had issues with the already very small LabVIEW subroutine calling overhead, and with such requirements I'm sure Java will not make the code any faster at all. Note: as a Microsoft Partner we do have access to the Visual Studio suite (and much other MS software) but still do the main development in LabVIEW. The Alliance Partner program comes IMHO close to that. Rolf Kalbermatter
  4. You need IMAQ Vision 8.6 for use with LabVIEW 8.6. IMAQ Vision 7.1 has no way to include VI libraries for LabVIEW 8.6, since it was released around the same time as LabVIEW 7.1, and that is at least 5 years ago. Rolf Kalbermatter
  5. It would seem you have run into a big problem here. The only instance that knows about the mapping of a dynamically assigned IP address is the DHCP server that issued that address. The way this is usually solved is that the specific client provides a DNS name to the DHCP server when asking for an IP address; the DHCP server then informs the DNS server about the new address. Since this seems to be no option in your case, there are only very few other feasible and sure-to-work approaches: 1) Add some address recovery support to the devices yourself. Basically this would be a UDP broadcast message that your devices react to. The device would expect a specifically formatted packet on a well-defined port. The packet could contain a specific MAC address, or a placeholder to let all devices that receive it respond with a packet containing their MAC address and current IP address. This is obviously something most device manufacturers do not make public, but since you are to test devices that your own company manufactures, this should not be a problem. Since UDP broadcasts are usually not routed, you can get in trouble with this in a setup with multiple IP subnets and would need to use TCP instead, but TCP does not have a universal broadcast mechanism like UDP. 2) Another solution is to simply add some remote IP update mechanism. One thing I have seen is a device that reacts to a ping message of a specific size by updating its own IP address with whatever address the ping contains. This may seem a bit useless, since you usually simply specify the IP address in the ping command. But it works in such a way that you first manipulate the local ARP table in your computer. You can do that with the arp command line program (or by calling into low-level Windows network APIs, but that solution is very cumbersome). First you make sure to delete any reference to the desired MAC address from the ARP tables.
Then you add a new ARP entry with the specific MAC address and the desired IP address. When you now issue the ping command to that IP address (using the data packet payload size your device is set up to recognize as the magic change-my-IP command), the device receives that ping packet and changes its internal IP address to whatever IP address the IP header of the ping packet contains. This obviously only works in the local subnet, since the ARP tables are not used when contacting a device in a different subnet; instead that request is passed on to the subnet gateway. Another theoretical solution would maybe be the use of InARP, the Inverse ARP protocol. How to set that up and make it work I do not know, however. Rolf Kalbermatter
  6. I noticed quite a few posts in the LAVA 1.0 board that were originally made by Ton Plomp but now have LAVA 1.0 as author. I would expect something like this to have happened to Ben too. I'm not sure if I "lost" any posts; I do not keep close tabs on them. Rolf Kalbermatter
  7. Actually there is quite likely some reason. I see that not just .Net is involved but also COM. And from my work with COM to interface to WIA I have encountered various strange things that seem to relate to the fact that the COM interfaces sometimes get into trouble marshalling their calls between LabVIEW and the out-of-process target component. I could trigger this very reliably when debugging C calls into those COM methods while the Call Library Node was set up to be reentrant, as soon as the Visual C debugger kicked in. It varied between COM methods failing with marshalling errors and a complete lockup of LabVIEW and the debugger. The solution for me was to set all Call Library Nodes to be non-reentrant during debugging and set them back to reentrant for the final library. Setting all calls to non-reentrant did not entirely fix this, but it made it possible to run at least far enough to debug the issues at hand. There were still some COM methods that sometimes (not always) failed with marshalling errors. I could imagine that the execution context in which QuickDrop is running might have issues running .Net and ActiveX methods reliably due to synchronisation issues. Especially the marshalling of COM data is always driven by the message pump of the calling application, and that is one really hairy part of LabVIEW to deal with. I'm glad I will never have to work on that code. Rolf Kalbermatter
  8. So the real solution would then be to add an endianness selector to the Typecast? Running and hiding! Rolf Kalbermatter
  9. Yes, I meant Full Duplex for sure, and the USB bus is able to do that. So if you have a USB to RS-485 interface with 4-wire output (I think specifying that is redundant, since RS-485 usually implies a 4-wire connection), then there should be no problem in having real Full Duplex operation. The converter will need a little intelligence and a buffer to store packets as they are transmitted over the USB bus back and forth, but to the normal observer it will look like real Full Duplex. Bidirectional as you seem to define it here makes of course little sense in such a setup. Rolf Kalbermatter
  10. You have the hardware and hopefully the documentation, so you are about 2 light years ahead of us in making this work! Sorry, but with so little information it is absolutely impossible to help you, and I doubt there is anyone else with this hardware setup who is reading this board AND has tried to make it work in LabVIEW AND is willing, able, and has the time to share his solution. So you will have to try harder to make this work. Rolf Kalbermatter
  11. Well, for that you have 4-wire on the RS-485 side, and while real bidirectional transfer over the USB bus is obviously not possible, the USB bus should, with the right driver, be fast enough to give the system the impression that it is indeed bidirectional. I haven't used bidirectional communication where the timing was so critical that quasi-bidirectional operation would have caused trouble. IMHO such a system could never work unless all involved parties are 100% real 4-wire devices and their firmware is exactly the same. But that would be highly useless, unless you want to create a protected, private, proprietary communication network. Rolf Kalbermatter
  12. While these are all nice-to-have things, it is not such a big problem to structure your local caching in such a way that you minimize possible data loss when your system decides "to go out for lunch". It's definitely not even close to the time you would have to spend to port SQLite into a proven, reliable RT solution for your target. Of course this assumes that you would do the porting. Hoping someone else will do it because of the nice technical challenge it provides would require less time on your part, but has a very good chance of happening somewhere between now and the end of all worlds. It would be different if such a solution could be commercialized, but I see little chance of that. Well, when I talk about an SQL engine I also consider a direct DLL implementation as such. For me it is not a daemon-like implementation that qualifies as an engine but the implementation itself. And I agree, daemons are in general not a very good idea on an RT system, unless you know exactly what they do and how they could interact with your system in the worst case. Rolf Kalbermatter
  13. Unfortunately this only returns NULL as serial number on my XP SP3 system, independent of the login I use. I have tinkered quite a bit with IOCTL_STORAGE_QUERY_PROPERTY myself, but it just doesn't seem to work on my machine and with that hard drive, except when using SMART directly; but for that I need to open PhysicalDrive0 with READ and WRITE access, and that fails without admin rights. Rolf Kalbermatter
  14. I haven't played with the BIOS serial numbers yet. I did some tests with my OpenG Port IO functions to read the physical memory directly to get at the BIOS information. But this requires a kernel driver that you can only install with admin rights (and elevated admin rights in Vista and higher), and most likely loading that kernel driver is also a privileged Windows operation, just like opening the PhysicalDrive. I was able to read various BIOS information that way, and the BIOS serial number was part of it, but I was of course logged in as admin. Rolf Kalbermatter
  15. Not exactly sure what you mean, but it should be fairly easy to modify those VIs to actually download a binary stream after they get the initial HTML header. Rolf Kalbermatter
  16. Well, I once was fairly fluent in vi commands, but if I sometimes happen to get into that mode nowadays, I look pretty silly. Talk about a non-intuitive editing flow! Rolf Kalbermatter
  17. Actually I was thinking the same, but in the case of loading a .Net assembly in the LabVIEW development environment 8.0 or higher, it should in fact use the project directory rather than the LabVIEW executable directory, according to some posts by Brian Tyler, the principal architect of the LabVIEW .Net interface and at that time THE NI .Net specialist, before he went to MS. But I have to admit I never tested that thoroughly. My only .Net exposure so far was really the development of a binary .Net interface module for LuaVIEW that mainly had to run under LuaVIEW and LabVIEW 7.1. Rolf Kalbermatter
  18. You can actually do that with the Windows Message Queue example on the NI site. Since it normally hooks the VI window, there are events it will never see, but hooking the application window should be possible too, albeit a bit more troublesome (and if you mess up that hook you can hose LabVIEW pretty badly, as I know from various projects where I went that path). Rolf Kalbermatter
  19. Just a few remarks here, although I'm not a .Net guru at all. I'm not sure if app domain is the right word here, but LabVIEW indeed registers the project directory for private assemblies. .Net basically only allows two locations for assemblies: the GAC and the private assembly directory, which MS might call the app domain. Obviously there is no project directory in a LabVIEW executable, and therefore the executable uses the default .Net private assembly directory, which is the executable directory itself. One additional complication is that you can only add strongly named (fully versioned and all) assemblies to the GAC. Supposedly all these restrictions are there to avoid DLL hell. The problem here might really be that the .Net reference stores the relative path from the project to the assembly, which would explain the mess you get on different computers, causing recompiles all the time. Rolf Kalbermatter
  20. I'm probably missing the point in this. What do electrons on the pavement have to do with a proper bidirectional USB protocol? Rolf Kalbermatter
  21. The problem with this setup, as it seems to me, is that the data resides on the RT system. This has a number of limitations: 1) How do you back that up? No IT-provided backup scheme is going to plug into your RT system easily to have that data backed up automatically. 2) How do you access the data? The data may be stored by the RT system, but usually the results are not interesting to the RT system anymore, but rather to your test department, calibration services, production management, etc. Accessing the data on your RT system will require them to use custom-made tools, written by you, to retrieve that data and copy it into a more normal database, so they can perform their queries and data mining tasks on it. 3) Assuming you do not need this data external to the RT system, why would you need a relational database at all? Those are the fundamental problems with this question. There are also technical ones, and building SQLite into a shared library that can run on those systems is probably not the biggest of them. More interesting to me would be the stress on the system caused by such a DB engine continuously running in the background, how long the various storage media (some are still pretty simple flash media) will last with a DB engine continuously reading and writing to them, etc. Also, assuming you want to do some really useful stuff with that DB on the RT system, there is a lot more to it than just making the C code compile, link and run on the RT system. Making sure the resulting engine will really do the right thing independent of all the specialties and constraints of an RT system, such as endianness (yes, the VxWorks RT targets are all PPC based and use big endian, whereas all the rest of LabVIEW nowadays uses x86 with little endian), will likely be a lot more work than getting the C code to compile and run.
You do not want to trust your vital data to a DB engine that will eventually crunch your data into an unrecognizable mess on some border condition. All these are things that would need to be investigated very thoroughly before you start to spend many days of work porting the C sources to compile and run properly on an RT system. Can you explain your use case for an embedded relational database to me more clearly? I would like to understand the issue. For what I have used DBs so far, it was either data logging (historical DB) or managing test results (where a relational DB may come in handy), but that data is typically needed by people who have no direct access to the RT system, so storing it in a network database is a much saner approach. So what I usually do for test data is to store the data locally in a buffer and cache it regularly in a simple binary file format. This data then gets transferred regularly to the host application and from there stored in a network database using an ODBC connection. Rolf Kalbermatter
  22. Walking the registry, while not necessarily too bad, is still a bit of work, and doing that regularly to detect the addition or removal of drives seems a bit expensive to me. And the registry unfortunately doesn't help in my case either. The serial number is not in there!!! Damn, I know that feeling!! Rolf Kalbermatter
  23. I would agree with the previous two posts. Full Duplex support is important for industrial applications, but with a little on-chip buffer this should be doable. The termination debugging is a good feature too. Are there internal termination resistors, and can they be disabled? What about bias resistors? The 5V power output on the port would be nice. It doesn't have to be high power, just a few mA, in order to add some bias resistors to the setup on the other side. Rolf Kalbermatter
  24. I see you are using the IOCTL_STORAGE_DISK ioctl codes now. Unfortunately that seems not to work for some (all?) SATA drives; at least on my computer the serial number field always remains empty using that method. This seems to be a known issue with the STORAGE class device driver API not returning the serial number for SATA drives. In short, all my research has not turned up any reliable method of returning the serial number both without admin rights and for all types of built-in drives. Don't even talk about USB connected drives: they return the model number divided up into vendor ID and product ID, which is meaningful for USB sticks, but they split the model ID into two parts for USB connected HDs and never return any serial number. Maybe they improved the storage device driver API in Win7 to also return the serial number, but at least on XP it will not work for SATA drives. Rolf Kalbermatter