Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. Well, if those dialogs still work like they used to in older LabVIEW versions, the panel itself is a real VI front panel from one of the resource files, but the implementation behind it is a dialog window procedure written in C(++), which is why it cannot be launched from another VI diagram.
  2. You like mega pronto saurus clusters, don't you!
  3. Nope! Array to Cluster is limited to 256 elements in its right-click popup menu. Of course you could add 4 clusters of 256 bytes each directly after each other.
  4. Well, every structure can of course be represented by a byte array, but you don't always have to go through that trouble. Fixed-size arrays inside a structure are in fact best implemented in LabVIEW as an extra cluster inside the main cluster, with as many elements as the number indicated between the square brackets, each of the array's element type. BUT: if the number of elements gets huge this is not practical anymore, as you end up with mega pronto saurus clusters in LabVIEW. Then you have two options: 1) Flatten the entire cluster into a byte array; before the call insert the input values into it, and after the call retrieve the elements by indexing into that array at the right offset. Tedious? You bet! And to make everything even more fun, you also have to account for memory alignment of the elements inside the cluster! 2) Create a wrapper DLL in C that translates between LabVIEW-friendly parameters and the actual C structures. Yes, it is some work and requires you to know some C programming, but it in fact needs less low-level knowledge about how a C compiler wants to lay out the data in memory than the first approach. A sketch of such a wrapper follows below.
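     As a minimal sketch of option 2, assuming a hypothetical library function GetDeviceInfo() that fills a struct containing a fixed-size array (all names here are invented for illustration):

        #include <stdint.h>
        #include <string.h>

        /* Hypothetical C API: a struct with an inlined fixed-size array. */
        typedef struct {
            int32_t id;
            double  values[512];    /* fixed array, inlined in the struct */
        } DeviceInfo;

        extern int GetDeviceInfo(DeviceInfo *info);   /* the assumed DLL function */

        /* Wrapper exported for LabVIEW: presents a scalar and a normal C array
           instead of the raw struct. Configure the Call Library Function Node
           with 'values' as an array of numValues doubles, pre-allocated in
           LabVIEW before the call. */
        __declspec(dllexport) int LVGetDeviceInfo(int32_t *id, double *values, int32_t numValues)
        {
            DeviceInfo info;
            int err = GetDeviceInfo(&info);
            if (!err) {
                int32_t n = (numValues < 512) ? numValues : 512;
                *id = info.id;
                memcpy(values, info.values, n * sizeof(double));  /* copy only what fits */
            }
            return err;
        }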
  5. Yes, Yair's idea won't work. The array inside the cluster is fixed size and therefore inlined. Putting a LabVIEW array in there is not just wrong once but twice: first, a LabVIEW array is not just a C array pointer but really a pointer to a pointer to a length-prefixed (Pascal-style) byte array; second, there is no array pointer in the structure but an inlined fixed-size array. So the correct thing to pass is a byte array of 1024 + 12 bytes as you have already figured out, and more correctly it should actually be totalFrames * (12 + 1024) bytes. Also, even though you may not plan to ever use this in 64-bit LabVIEW, it would still be useful to configure the handle parameter as a pointer-sized integer instead (and use a 64-bit integer control on the front panel to pass that handle around in the LabVIEW diagrams). A sketch of the memory layout follows below.
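     For reference, a rough sketch of the two layouts in C; the frame header field names are invented, only the 12 + 1024 byte size and the pointer-sized handle come from the post above:

        #include <stdint.h>

        /* Hypothetical frame layout: a 12-byte header followed by an inlined
           1024-byte data array, repeated totalFrames times in one buffer. */
        typedef struct {
            uint32_t frameNumber;   /* 12 bytes of header fields (invented names) */
            uint32_t timestamp;
            uint32_t status;
            uint8_t  data[1024];    /* inlined fixed-size array, NOT a pointer */
        } Frame;                    /* sizeof(Frame) == 12 + 1024 */

        /* What a LabVIEW array actually is: a handle, i.e. a pointer to a
           pointer to a length-prefixed block. Passing this where the DLL
           expects an inlined array would be doubly wrong. */
        typedef struct {
            int32_t dimSize;        /* number of elements */
            uint8_t elm[1];         /* followed by dimSize bytes of data */
        } LVByteArray;
        typedef LVByteArray **LVByteArrayHdl;

        /* The device handle parameter should be declared pointer sized: */
        typedef uintptr_t DeviceHandle;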
  6. Quite a lot of things wrong indeed. In the first attempt you use Variant to Flattened String to flatten the binary string that you read from the file. But a binary string is already flattened; turning it into a variant and then flattening it again is not only unnecessary, it is like forcing a square peg into a round hole and then through a triangular one. I'm not familiar with this implementation of the SQLite library, but the BLOB data seems to be a binary string already, so I would assume that connecting the wire from the file read directly would be the best option. Personally I would have made the BLOB function accept only a byte array, as that is more what a BLOB normally is; in that case String to Byte Array would be the right choice for converting the binary string of the file contents into the byte array. The second attempt is not flattening the JPEG binary content but the LabVIEW-proprietary pixmap format. This is uncompressed and likely 32 bits per pixel, so a decent-sized JPEG image suddenly gets 6 to 10 times as large in this format. You could write that as a BLOB into the database, but the only client who can do anything with that BLOB without reverse engineering the LabVIEW pixmap format (which is not really difficult, since it is simply a cluster with various elements inside, but still quite some work to redo in a different programming language) is a LabVIEW program that reads this data, turns it back into the LabVIEW pixmap format and then uses it. In the third attempt you take a LabVIEW waveform and flatten that. A LabVIEW waveform is also a LabVIEW-proprietary binary data format. The pure data contents wouldn't be that difficult to extract from that stream, but a waveform can also contain attributes, and those make the format pretty complicated if present (and to my knowledge NI doesn't really document this format on the binary level). So the first is a clear error; the other two might be OK if you never intend to use anything other than a LabVIEW-written program to retrieve and interpret the data.
  7. Depending on the LabVIEW realtime version used it may be necessary to add an extra configuration file to the system (this is for Linux-based cRIOs but should be the same for a myRIO). For FTDI-based devices this usually seems to work out of the box, but your mileage may vary depending on the USB vendor and product IDs used by the manufacturer of the device. The NI Linux realtime kernel has additional drivers to support other adapters too.

     The first check is to run lsmod from a command line on the controller (you know what SSH is, don't you) to see which driver modules are currently loaded. Run it once after a full reset of the system, before the adapter is plugged in, and then again after plugging in the adapter. There should be at least one new driver module loaded which has usbserial as its parent. If this already fails, the adapter is not recognized by the Linux USB subsystem.

     Once lsmod has shown a driver to be loaded, make sure the adapter is plugged in and then look for /dev/ttyUSB devices by entering ls -l /dev/ttyUSB* on the command line. This should give a listing similar to:

        crw-rw-rw- 1 admin tty 188, 1 Oct 28 02:43 /dev/ttyUSB0

     Important here is that at least the first two rw- groups are present and that the group (after the admin owner) is set to tty. If VISA recognizes the port but lists it as Not Present, one of these settings is most likely wrong. To change the permissions with which a specific device is mounted by the kernel device discovery subsystem, there are two different methods depending on which Linux kernel version is used.

     NI Linux Realtime 2013: this uses the old mdev system to add dynamic devices. Make sure to add the following line to /etc/mdev.conf:

        ttyUSB[0-9]* root:tty 666

     Possibly the root entry should be replaced by whatever the login name of the administrative account on that system is (admin).

     NI Linux Realtime 2014 (and supposedly newer): this uses the newer udev system to add dynamic devices. Create a text file with the name ttyUSB.rules with the following contents:

        KERNEL=="ttyUSB[0-9]*", GROUP="tty", MODE="666"

     and add this file to the directory /etc/udev/rules.d.
  8. That is not quite true. LabVIEW for Windows 32-bit does indeed use packed data structures. That is because when LabVIEW for Windows 3.1 was released, there were people wanting to run LabVIEW on computers with 4MB of memory, and 8MB of memory was considered a real workstation. Memory padding could make the difference between letting an application run in the limited memory available or crash! When LabVIEW for Windows 95/NT was released, memory was slightly more abundant, but for compatibility reasons the packing of data structures was retained. No such thing happened for LabVIEW for Windows 64-bit or any of the other LabVIEW versions such as Mac OS X and Linux 64-bit. LabVIEW on these platforms uses the default padding for the platform, which is usually 8 bytes or the element's own data size, whichever is smaller. The correct thing to use for byte-packed data is #pragma pack(1); afterwards, #pragma pack() will reset the packing to the default setting, either the compiler default or whatever was given to the compiler as a parameter (respectively what the project settings contain). A small example follows below. It sure is, and I think trying to create this structure in LabVIEW may seem easier but is in fact a big PITA. I personally would simply create a byte array with the right size (plus some safety padding at the end) and then create a VI to parse the information from the byte array after the DLL call. And if there were more complicated data parameters for this DLL, I would even create a wrapper DLL that translates between the C datatypes and more LabVIEW-friendly datatypes.
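     To illustrate (a minimal sketch; the struct itself is invented, only the pragma usage comes from the post above):

        #include <stdint.h>

        #pragma pack(1)        /* byte packing, matching 32-bit LabVIEW clusters */
        typedef struct {
            uint8_t  flag;     /* offset 0 */
            uint32_t value;    /* offset 1 packed; would be offset 4 with default padding */
            double   reading;  /* offset 5 packed; would be offset 8 otherwise */
        } PackedRecord;        /* sizeof == 13 packed, 16 with default alignment */
        #pragma pack()         /* restore the compiler/project default */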
  9. These are the VIs that you normally get when installing the IMAQdx drivers. You do want to install the IMAQ Vision Module too, AFTER installing any new LabVIEW version. But yes, there is a chance that the Vision 2016 installer doesn't know about the LabVIEW 2017 location. If you have a previous version of LabVIEW installed alongside on the same machine, you can try to copy the IMAQ Vision directory from the previous LabVIEW version over to your LabVIEW 2017 directory after you have installed Vision 2016. But you should probably talk to NI about updating Vision 2016 to Vision 2017 too, as that will make things easier (and there might be a license issue anyhow if you try to run the Vision 2016 VIs in LabVIEW 2017 without a properly updated NI Vision license).
  10. Even if he had, he is factually still right. The differences have been small in the last few years; that is to say, there haven't really been groundbreaking new features in a new release in quite some time. Personally I'm quite fine with that, as I'd rather have a stable development environment than a new UX-compatible UI widget set that will be obsoleted by the next Google developer conference or Microsoft cheerleader party again.

     That said, we still do not start new projects on a pre-SP1 version of LabVIEW. Everybody is allowed to install the latest version, but the version used for a particular project is defined by the project leader at the beginning of the project, and that is not going to be a non-service-pack version. Since development of projects is typically a multi-person task nowadays, there is also no room for someone to just go with whatever version of LabVIEW he prefers. Version changes during a project principally don't happen, with a few very rare exceptions for longer-running projects when a new version of LabVIEW or its drivers supports a specific functionality much better or fixes a fundamental bug. Not even for bugs that can be worked around will we upgrade to a new version during a project. The reason for this is obvious: a LabVIEW program as we write them is not just LabVIEW alone. It consists of all kinds of software components: NI- and MS-written, third-party and in-house developments. Any change of a component in this mix can have far-reaching consequences that aren't always visible right away. For in-house developed software we can quickly debug an issue and make a fix if necessary, so there the risk is contained, but for external software it is much harder.

     It often feels like a black hole when reporting bugs. It's almost impossible to even track down if and when a bug was fixed. This is similar for NI and other external software, but considering the tight contact with NI it feels like being used. The whole bug reporting process feels very much like one-sided communication, even when I follow threads on the fora. For problems reported there, if there is a reaction from some blue bird at all, it often is a rather standard one, where the poster is thanked for reporting a problem and then asked a number of standard questions about version numbers and involved software, which sometimes have no relevance to the problem at hand and sometimes could even have been inferred from the first post if read properly. This sometimes goes on for a few more posts and then the thread dies, without any final resolving post by any blue bird. It may have been solved offline, but the reader on the forum doesn't see this. It looks like a back and forth of more or less related questions and answers, and then it vaporizes like a tiny stream of water in the desert.

     In addition, I'm myself not a proponent of always installing the latest and greatest software version if not really necessary. And since we also develop tools and libraries for reuse, it is again not an option to only develop them for the latest version. So even if a bug gets fixed I might not profit from that immediately, and due to the feeling of being so disconnected anyway, it doesn't even feel like it is ever getting fixed.
  11. Notepad++ (https://notepad-plus-plus.org/) is similar to UltraEdit in functionality and has a "Hex-Editor" plugin that you can install, which allows you to view a file in hex display format.
  12. Yes, you need to install the shared library too. If you ran VIPM to install the package, you should have gotten a prompt somewhere along the install asking you to allow installation of an additional setup program. This installs the NI realtime extensions for the LVZIP library. After that you need to go into NI MAX, go to your target and select the option to install additional software. Deselect the option to only show recommended modules and there should then be a module for the OpenG ZIP library. Install it and the shared library should be on your controller.
  13. Well, the base was pretty solid and worked for most things; I personally find it impressive that a rework never really was necessary. That said, the control editor certainly could have benefited from some love in the meantime. It works, but it always was a bit of a difficult thing to use for actually working on controls, and you can easily end up with a control so messed up, with the individual parts impossible to grab in the way you want, that it's often easier to start from scratch again. And I have it from the original developer who worked on the control editor that it wasn't a piece of software they felt proud of, and that in order to redesign it, it would need to be completely rewritten, as modifications of the existing code base would be basically unmaintainable. You may judge them for that, but you have to consider the times back then. Experiments to compile some of the LabVIEW code base with C++ were a disaster, as the C++ compilers of that time produced massively bloated code with sometimes pretty bad performance, so the entire code of LabVIEW was written in standard C. Even C compilers were struggling to compile the huge LabVIEW code base (NI had to beg Apple repeatedly to extend the internal tables of the Apple C compiler to be able to handle the number of symbols the LabVIEW code required). NI could have waited a few years with LabVIEW, hoping that C compilers would get better, but that wasn't really a good option, so they had to work with what was available. And as C compiler tools got better, there were many other areas that required attention to implement new features, rather than redesigning existing features that basically worked.
  14. Disable the live drag feature, as mentioned in the thread from Neil. This gets rid of the potentially long and jerky refresh and the correspondingly bad drag-and-drop behaviour that you can get with this feature in larger projects and VIs. Also another question: what happens if you don't pass the DVR reference back into the class wire? I generally don't do that at all, but simply use an Unbundle to get the DVR from the class private data and then an IPE structure to dereference the DVR, without putting the DVR reference back into the class afterwards. I agree that your way should still work, but I have a hunch that this particular use case might have been overlooked during testing of the read-only flag for DVRs.
  15. The control editor? No, that has been in LabVIEW since at least version 2.x, and at that time they didn't even have the option to call VIs as an integral dialog of the IDE. These dialogs themselves do in fact use the same resource format as a LabVIEW front panel, but the code handling them is all written in C and part of the LabVIEW executable.
  16. I figured as much, as I did try it on a myRIO and also on a normal Linux x86 system and couldn't reproduce the problem. I currently do have a 9035 available for another project (which has higher priority) and will try it on that one as well very soon.
  17. I'm not sure it is the same as in LabVIEW 2016, but they changed quite a bit about how the moving (and Ctrl-dragging) of objects works. When first confronted with this, it felt smooth and actually quite impressive. That is, until I had to work on a larger project in LabVIEW 2016. Suddenly everything started to get quite jerky, and I frequently ended up involuntarily dropping things somewhere on the diagram where I had never intended to, simply because somewhere along the jerkiness LabVIEW decided that I had released the mouse button or something. To me it feels like a nice idea that was executed based on limited testing and not quite verified on real-world, professionally sized projects. Or every developer at NI uses only the highest-end, super-powered machines with CPUs that normal money can't buy.
  18. The sentiments you posted in your first unedited post are fully valid. NI has gone from a small technically driven company to a middle-sized bean-counter-controlled company. It can feel rubbish at times, but I'm not sure you or I will change that in any way. Their customer support is still above average in many cases, but if the problem gets difficult you can sometimes run into a brick wall, and the people in support are not allowed to leave the predefined channels even if those don't work. But without a proper reproducing case that also shows the symptom on a system that NI can test on, there is nothing anyone could do about this. The PSE in question will shoot down any CAR without at least a clear description of how to reliably reproduce the problem. Unless you can show NI in a convincing way that you will buy a few millions' worth of extra hardware if it works.
  19. Was that on NI RIO hardware or a standard PC used as an RT system? If the latter, there is still a very realistic chance that it is due to some specific hardware revision of the chips in your system. They tend to have all sorts of bugs that can negatively affect software. Systems like Windows contain a lot of pretty involved software hoops in the hardware drivers to work around such bugs. Even Pharlap has those, but to a much lesser degree, since it is not used as much and not on as many more-or-less mainstream hardware systems built with all kinds of components from sometimes rather obscure sources. PCI bridge implementations are very well known to not always follow the official standard to the letter, and even those standards are regularly revised and improved to better support certain advanced modes. Even if it is NI hardware this is still true, but then NI should, with some effort, be able to reproduce it. From the sound of your error description it looks like a race condition somewhere in the kernel that can, under very specific circumstances, somehow end in a mutual exclusion deadlock.
  20. But it's logical. Without a reproducing case there is nothing you can realistically fix. Sure, you could send someone to check all umpteen million lines of source code in LabVIEW, DAQmx, NI-488.2 and low-level drivers like NI-KAL etc., but that is a task no person could complete in an entire lifetime, and they would still miss hundreds of potential gotchas. Unless it's reproducible it is not a bug; at most it could be your imagination, or a cosmic ray causing it. And from my own experience as an AE more than 20 years ago, and from application development for customers: if I can't reproduce it, there is a serious chance that the problem is indeed at the other end of the line and not in the software I am supposed to support, no matter how angry a customer is about the piece of sh*t software he presumably bought. And if you have spent hours supporting such a customer and after many hours finally realize what error on his side caused it, it can still be a challenge to break that news to him in a sensible way. The people who tend to get most angry are often not very good at admitting their own faults.
  21. Just as infinitenothing has mentioned: unless you use the Run VI method or the Start Asynchronous Call node of the VI Server, it is highly unlikely that you are running an independent VI hierarchy in your application that could cause the refnums to be automatically cleaned up.
  22. What FUD! Subversion has never been licensed under the GPL. It was developed by CollabNet and distributed under a fairly liberal license. Later it was transferred to the Apache Software Foundation, which distributes software under the Apache License, which is also NOT comparable to the GPL. Even then, the GPL does not cover the license of code maintained with a GPL-licensed tool, but only code you would link in some way with that tool. And in the case of the Linux kernel there is even an explicit exception that applications running on Linux, and therefore technically linking to the kernel in some way (they need to do kernel system calls for just about everything that interacts with the system), do not fall under the GPL unless the application developer elects to use the GPL for his software.

     As to VIPM, it is under a commercial license from JKI. The OpenG libraries that are used inside VIPM, as well as being installed through VIPM, are all under a BSD license, except the shared library parts that I wrote, which I left under the LGPL. Technically this has no influence on any application developed with such OpenG libraries. The VI parts are BSD-licensed and allow you to do almost anything with them, except claiming you wrote them yourself, and they require you to put a copyright notice somewhere stating that your application uses libraries from OpenG. The LGPL-licensed shared libraries in those tools don't taint your application either, since they are dynamically linked as a library, and since the LGPL explicitly exempts any software that uses such a library in this way from any obligation to be open sourced itself, you are fully safe there. The main limitation the LGPL imposes on those libraries is that you can't grab their C source code, create your own shared library from it, and not open source that under the LGPL (or GPL) yourself. I feel this is a fair limitation: if someone takes that code and improves it in any way, I want a chance for that improvement to be returned to the community.

     If you use TortoiseSVN then yes, that is distributed under the GPL, but even then, claiming that because your source files pass through TortoiseSVN they somehow suddenly become GPL-licensed is total bullshit. It's analogous to claiming that any car driving over your private road is automatically owned by you from that point on. You may forbid other cars to drive on that road and get a ruling from a judge that anyone still driving there without your consent can be fined, but you don't automatically own them. Actually, I think it is even more analogous to anyone driving on the public road in front of your house suddenly being liable to you for the mere fact of driving there!
  23. You should check the link to the NI site. It only shows a Page Not Found error.
  24. Your assumption is not fully safe. File I/O is protected to some degree, but if your application can access the same file from multiple locations asynchronously, you do have to employ some protection yourself. Most of the time I actually have LV2-style globals that manage a certain subset of the INI settings for an application. Each of them has an initialize method, a get and a set method, and a save method, and if some piece of information needs to be modified asynchronously, a specific modify method for that information. The initialize method reads the configuration into the uninitialized shift register and is typically executed during program initialization; the save method saves the content of the shift register to the INI file section that it handles, and is sometimes called at the end of the application and whenever the user changes a setting in the configuration. For program runtime variables that you want to save somehow to some persistent file storage you should do something similar, but you might also need some kind of resource locking if you intend to read the value, do something with it, and then save it back. Reading the LV2-style global does not require actually reading the value from disk, as it is already stored in the shift register, but you may have to distinguish between just reading the current value for reference, in which case you simply read the LV2-style global, and reading it in order to modify it and write it back, in which case you may also need to acquire a semaphore that is released after you have written the value back. A rough sketch of this pattern follows below.
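     As a rough C analogue of that LV2-style global pattern (in LabVIEW the non-reentrant VI serializes access by itself; here a mutex plays that role, and all names are invented for illustration):

        #include <pthread.h>
        #include <string.h>

        /* One "LV2-style global" guarding one INI section. The static
           variable plays the role of the uninitialized shift register. */
        static pthread_mutex_t cfgLock = PTHREAD_MUTEX_INITIALIZER;
        static char serverName[256];           /* the stored setting */

        void CfgInitialize(void)               /* read the config once at startup */
        {
            pthread_mutex_lock(&cfgLock);
            /* ... parse the INI file section into serverName here ... */
            pthread_mutex_unlock(&cfgLock);
        }

        void CfgGet(char *buf, size_t len)     /* plain read: no disk access needed */
        {
            pthread_mutex_lock(&cfgLock);
            strncpy(buf, serverName, len - 1);
            buf[len - 1] = '\0';
            pthread_mutex_unlock(&cfgLock);
        }

        void CfgModify(const char *suffix)     /* read-modify-write under ONE lock */
        {
            pthread_mutex_lock(&cfgLock);
            strncat(serverName, suffix, sizeof(serverName) - strlen(serverName) - 1);
            pthread_mutex_unlock(&cfgLock);
        }

        void CfgSave(void)                     /* write the section back to disk */
        {
            pthread_mutex_lock(&cfgLock);
            /* ... write serverName back to the INI file section here ... */
            pthread_mutex_unlock(&cfgLock);
        }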
  25. You're making several errors here. The main one is that you access a global resource concurrently without any protection. The File I/O VIs have built-in protection through the deny and access rights, but since you do no error handling, you completely miss that something went wrong. Basically, without an error cluster fed through the VIs you are bound to fail without any good indication. There is nothing fundamentally wrong with the File I/O functions, but the way you try to use them simply can't work. A file, just like a global variable, is a global resource and needs to be treated like one. If you don't watch out, you create race conditions, and in this case, since the File I/O functions do some resource locking to prevent the worst from happening, also errors that you don't handle at all. Both the Open Configuration File and the Close Configuration File functions can fail if the file is open at that moment through another function. If the Open Configuration File function fails, you have to retry opening it until it succeeds before you can read any data. If the Close Configuration File function fails with "write if modified" set to True, you equally have to retry it; otherwise the data you modified in the INI file data is not written back to the file on disk. Again: only Open Configuration File and Close Configuration File actually access the file on disk at all; everything else is done in memory. Open Configuration File opens the file for reading and will fail if the file is currently open for writing by a Close Configuration File call that needs to save some data back to the file. And Close Configuration File will open the file for read/write, which will fail if anyone else has the file already open, no matter whether just for reading or also for read/write access. A sketch of the retry idea follows below.
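     To illustrate the retry idea in C terms (this is not the LabVIEW Config VI API; CreateFile's share mode behaves like the deny rights mentioned above, and the helper name is invented):

        #include <windows.h>

        /* Keep trying to open the shared file for exclusive read/write:
           the call fails with a sharing violation while anyone else has
           the file open, so the caller backs off and retries. */
        HANDLE OpenExclusiveWithRetry(const wchar_t *path, int maxTries)
        {
            HANDLE h = INVALID_HANDLE_VALUE;
            for (int i = 0; i < maxTries; i++) {
                h = CreateFileW(path, GENERIC_READ | GENERIC_WRITE,
                                0 /* deny all sharing */, NULL,
                                OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
                if (h != INVALID_HANDLE_VALUE)
                    break;                      /* we now own the file exclusively */
                Sleep(10);                      /* back off 10 ms, then retry */
            }
            return h;                           /* INVALID_HANDLE_VALUE on failure */
        }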