Everything posted by LogMAN

  1. Me too, but people don't like it when I down-vote them, so I do it ~stealthily~ Damn you, logic! You are in my way! You are right of course, I can't argue with that. Now where is my (:brokenheart: Dislike this) button?
  2. Figured. Funny thing is, I actually wrote a couple of sentences about that, but deleted them in the end. This is what I dislike about the voting system (it's an extreme case, but a case nonetheless): http://meta.stackexchange.com/questions/228358/why-my-questions-on-stackoverflow-are-getting-downvotes-without-explanation The system can only work if used as intended. By the way, is it possible to disable the "voting-down" button and leave the "voting-up" button active?
  3. Isn't that more or less what our good old (♥ Like this) button is for? It serves its purpose just fine - even though we cannot filter for that right now (*hint*)... In my opinion the best answer is the one that answered the question of the person who asked, not the one everyone else votes as the best answer. There might even be an answer that is very well written, explains an important aspect or is especially funny, and ends up as the top answer without even answering the initial question. The possibility to vote someone's answer down, on the other hand, is misunderstood very quickly and will alienate people who fear "bad scores". It also hinders dialog, as people just need a single mouse-click to judge someone's answer. So how about posting the initial question in the General forum and letting the voting score decide? Where is the voting system when you need it?
  4. Amen. Thanks Neil, I was attempting to write exactly that right now (though I had trouble finding the right words). One thing I want to mention related to the graphs above: Maybe there are fewer topics started overall, however the existing ones are extremely helpful as is and are being continued on a regular basis (even ones that are over 10 years old). This forum provides a tremendous amount of knowledge, ideas and funny things which lure me to come back every day (sometimes even on holidays). Also: Don't trust statistics you didn't forge yourself
  5. I too had some issues a couple of hours ago (see attached picture), but it's better now.
  6. Have you checked your settings here?: https://lavag.org/notifications/options/ email notifications were disabled for me, maybe it's the same for you too.
  7. Not sure if this helps, but hooovahh secretly explained the XNode editor on YouTube (I recommend watching the entire video btw ): It is closely followed by a quick introduction to the Variant Repository https://www.youtube.com/watch?v=R2En7yMANi8&feature=youtu.be&t=34m44s Wow, new Lava embeds videos now
  8. The Articles area looks very broken to me (see attached picture): https://lavag.org/index.html/ Any other page is fine though.
  9. I think you have no choice but to try and reproduce all the steps your user has taken to get to that kind of issue (you could connect via remote desktop and let the user show you). With the little information we have, there are at least two things I would check: 1) Check if the correct Run-Time Engine has been installed. 2) Check if all necessary files of your application are present. (The dialog clearly states two are missing.) There are two types of Run-Time Engines available for LV2012, minimum and standard. Standard is the one your user needs. I'm not sure what happens if you install the minimum one, but it's worth mentioning. If your application consists of separate folders with VIs next to your executable, some of them might be missing. The same goes for external plug-ins and such.
  10. The way your VI is implemented is the correct way to dispose of objects (or any given non-reference type in LabVIEW). The wire has "loose" ends (after the last VI), so that particular copy of the object is removed from memory automatically. Only reference-types must be closed explicitly (like the DVR). So your final VI just has to take care of references within that object to prevent reference leaks.
  11. Okay, this is getting a bit off-topic, as the discussion is about a specific problem which is not necessarily SQLite related. So I guess this should be moved to a separate thread. drjdpowell already mentioned that SQLite is not the best solution if your data is not structured. TDMS, on the other hand, is meant for graph data, but creates index files in the process and stores data in a format readable by other applications (like Excel). That is what slows down your writing/reading speed. As far as I understand, you want to store an exact copy of what you have in memory to disk in order to retrieve it at a later time. The most efficient way to do that is binary files. Binary files have no overhead. They don't index your data as TDMS files do, and they don't allow you to filter for specific items like an (SQLite) database. In fact the write/read speed is only limited by your hard drive, a limit that cannot be overcome. It works with any datatype and is similar to a BLOB. The only thing to keep in mind is that binary files are useless if you don't know the exact datatype (same as with BLOBs). But I guess for your project that is not an issue (you can always build a converter program if necessary). So I created a little test VI to show the performance of binary files: This VI creates a file of 400MB on the user's desktop. It takes about 2 seconds to write all data to disk and 250ms to read it back into memory. If I reduce the file size to 4MB, it takes 12ms to write and 2ms to read. Notice that the VI takes more time if the file already exists on disk (as it has to be deleted first). Also notice: I'm working with an SSD, so good old HDDs will obviously take more time.
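The "exact copy of memory to disk" idea from the post above is not LabVIEW-specific; here is a minimal Python sketch (my own illustration, not part of the original post) showing the same trade-off: a raw binary file is just the bytes, with no index and no type information, so reading it back requires already knowing the datatype - exactly like a BLOB.

```python
import os
import tempfile
from array import array

# Sample data: one million double-precision values (8 MB of raw bytes).
data = array("d", (float(i) for i in range(1_000_000)))

path = os.path.join(tempfile.gettempdir(), "blob.bin")

# Write: one contiguous block - no index files, no metadata, just bytes.
with open(path, "wb") as f:
    data.tofile(f)

# Read it back. You must already know the datatype ("d" = float64);
# the file itself carries no type information, same as a BLOB.
restored = array("d")
with open(path, "rb") as f:
    restored.fromfile(f, len(data))

assert restored == data
os.remove(path)
```

The file size is exactly `8 * len(data)` bytes, which is why raw binary round-trips are limited only by disk throughput.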
  12. @drjdpowell: I installed the latest version of your library (1.6.2). There is a new VI called "SQLite Database Path" which is missing the output terminal for "Last INSERT RowID":
  13. I guess you mean that the other way around Thank you for the great links, I didn't know there was such a detailed list of supported versions specifically for Windows 10. This will save much time and effort for anyone planning to take that step. Also one would think they would keep the shop up-to-date in order to sell the product...
  14. Welcome to LabVIEW and welcome to the forums, hope you enjoy your stay. LabVIEW does not yet officially support Windows 10, see LabVIEW Operating System Support. That being said, I've successfully installed LV2011 and LV2015 (x64) on Windows 10 and had no problems with the installer (don't know about LV2014 though). As far as I can tell from your screenshots, LabVIEW is already installed on your system (it does not require any more disk space). Are you sure that's not the case? Did you install LabVIEW before upgrading to Windows 10? => Check if there is a folder "C:\Program Files\National Instruments" Let me give you answers to two possible scenarios in advance: If you can find the "National Instruments" folder under "Program Files", you have most likely upgraded from a previous Windows version without resetting your computer. That might work for many applications, however in my experience this causes more trouble than resetting your computer and installing everything from scratch. Check the Recovery options in Windows 10 to learn how to do that (use the 'Remove everything' option) and make sure to create backups of your important files first. This will of course require you to install and configure all of your applications again (including all settings you've done in Windows)! Don't do that if you don't know how! It might be that LV2014 specifically has issues on Windows 10. Your license key will work with different versions of LabVIEW, however (it should work with every version from 8.0 upwards). Try downloading the latest version from their site (search the web, or go to their FTP servers). Try installing LabVIEW 2015 (x64). Here is the direct link: ftp://ftp.ni.com/evaluation/labview/ekit/other/downloader/2015LV-64WinEng.exe Hope this works for you.
  15. This is only true for the loop without the Event structure. The Event structure will actually wait until either an event or the Timeout occurs. By default the Timeout is -1, causing the Event structure to wait indefinitely, thus requiring zero CPU. You can change that by wiring a Timeout to the Event structure, which allows you to execute code in the Timeout case on a regular basis. The loop without the Event structure, on the other hand, will always loop, even though, as you correctly stated, the loop can be slowed down using the Wait (ms) or Wait Until Next ms Multiple functions. Maybe the following VI snippet helps in understanding what ScottJordan already explained (this is based on your VI, however I added another Click button to trigger the Value (Sgnl) property): Press the Click (no Event) button to update the value of Numeric (Test will not increase). Press the Click (Event) button to trigger the Event (Test will increase). Find attached the VI in LV2013 (the snippet is for LV2015). Value vs. Value (Sgnl) LV2013.vi
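The "wait for event or timeout vs. always loop" distinction above can be sketched in any language; here is a minimal Python illustration of mine (not from the original post), using a blocking queue as a stand-in for the Event structure:

```python
import queue
import threading
import time

events = queue.Queue()

def ui_thread():
    # Simulate a button press arriving after a short delay.
    time.sleep(0.2)
    events.put("Click (Event)")

threading.Thread(target=ui_thread, daemon=True).start()

# Event-structure style: block until an event arrives or the timeout
# elapses. The waiting thread consumes no CPU while it is blocked.
try:
    event = events.get(timeout=1.0)  # like wiring a Timeout terminal
    print("handled:", event)
except queue.Empty:
    print("timeout case")  # the Timeout case of the Event structure

# Polling style (the loop without an Event structure) would instead
# wake up over and over even when nothing happened:
#   while True:
#       check_value()
#       time.sleep(0.01)   # Wait (ms) only slows the spin, it still spins
```

The blocking `get` is the key point: between the put and the get, the OS parks the thread, which is the "zero CPU" behavior of an Event structure with Timeout -1.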
  16. I haven't seen such a tool around. However, you should give the LabVIEW Link Browser a try. It's a tool to visualize the dependencies between all sub-VIs of a selected VI. I used it a couple of months ago to demonstrate the differences between those linking issues, so maybe it works for you too. Take a look at this post (scroll down to the picture; don't forget to read the post too): https://lavag.org/topic/18654-should-i-abandon-lvlib-libraries/?p=112165 Here is an online example: http://resources.chrislarson.me/cla/#/ It's actually an open-source project on GitHub: https://github.com/wirebirdlabs/links, so just clone it and play around a bit. Maybe someone knows a way to make it visualize libraries only.
  17. We just moved from LV2011 to LV2015 and basically had the same issues even though we don't have a separate build PC. We solved this by using a virtual machine for LV2011 and working with LV2015 on the host. So no need for a second hardware. Maybe that works for you too.
  18. There is to my knowledge no way to retain the palettes when changing the source folder. It's the way VIPM has been designed. However there are ways to get what you want: If you always place the VIPB file in the root folder of the sources, all files are linked with relative paths, so you only have to copy the VIPB file to the new root folder and change it where necessary. As the source path is relative the palettes will persist - or rather you don't have to change the source folder in the first place. Another way is to manually edit the VIPB file. It's basically an XML file and quite easy to read. Search for "Library_Source_Folder" and insert the new path. Or if you just want to replicate the palettes copy the entire "Palette_Sets" subtree.
  19. Absolutely right, however you can disable that behavior for a calling thread. To do so, just call Wow64DisableWow64FsRedirection, and Wow64RevertWow64FsRedirection when you are done. Make sure to call both methods in the same thread (I have used the UI thread in the past)! In between those calls you can access the System32 directory normally. Very important: All calls to the System32 directory must be executed in the same thread as the DLL calls! EDIT: You might be interested in reading this explanation of the File System Redirector: File System Redirector
  20. You are right! I've just tried it. Initializing an array of Booleans vs. an array of U8 allocates the exact same amount of memory. I did not know that! I honestly thought LabVIEW would auto-magically merge bits into bytes... Reality strikes again. So I revoke my last statement and state the opposite: The primitive solution won't work on multiple bits simultaneously. We could do this ourselves by manually joining Booleans into bytes and using the primitives on bytes instead of Booleans. Not sure if there is anything gained from it (in terms of computational performance), and I think it's not the subject of this topic, so I'll leave it at that.
  21. Try to estimate the amount of CPU cycles involved. Your code has two issues: 1) The bits must be put together to form an array (they originate from three different locations in memory). 2) You have an array of 3 bits, so that makes 1 Byte (the least amount of memory that can be allocated). The numeric representation is a U32 (4 Bytes), so it won't fit without additional work. The system actually has to allocate and initialize additional memory. After that there is a Case structure with n cases. The CPU has to compare each case one-by-one until it finds a match. Now the primitive solution: First the CPU gets a copy of one bit from each of the first and second arrays and performs the OR operation (1 cycle). The result is kept in the CPU cache and used for the following AND operation (another cycle). No additional memory is required. The primitive solution could actually work on 32 or even 64 bits simultaneously (depending on the bitness of your machine and the CPU instructions used), whereas the numeric solution must be done one-by-one. Hope this makes things a bit more clear.
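The "work on 32 or even 64 bits simultaneously" point can be sketched outside LabVIEW as well. Here is a small Python illustration of mine (not from the original post): pack the Boolean arrays into plain integers, one bit per element, so a single OR and a single AND process all 64 elements at once instead of once per element.

```python
# Three Boolean arrays of 64 elements each.
a = [True, False, True, True] * 16
b = [False, False, True, False] * 16
c = [True, True, True, False] * 16

def pack(bits):
    """Pack a list of Booleans into one integer, least significant bit first."""
    word = 0
    for i, bit in enumerate(bits):
        if bit:
            word |= 1 << i
    return word

# One OR and one AND now operate on all 64 bits at once,
# instead of element-by-element.
result_word = (pack(a) | pack(b)) & pack(c)

# Unpack and compare against the element-by-element version.
packed_result = [bool(result_word >> i & 1) for i in range(len(a))]
loop_result = [(x or y) and z for x, y, z in zip(a, b, c)]
assert packed_result == loop_result
```

The packing step has its own cost, of course, which matches the post above: the gain only shows up when the bitwise operations dominate, and a CPU (or LabVIEW's compiled code) does this word-at-a-time processing on byte arrays natively.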
  22. I get very similar results to the ones CraigC has shown. The numeric conversion really is no good option here, neither for performance nor for memory. In order to find an answer to the initial question I have performed my own tests based on your earlier benchmarks. My benchmark includes several cases with different ways of solving the same issue. Each case will use the same values, and the VI runs in a loop to get some long-term timings. The AND condition can either be constant TRUE, FALSE or random values. The last case will perform the same operation as all the others before, but this time without a for loop, so basically this is totally LabVIEW optimized. For readability a chart shows the timings in comparison. Oh yeah, and since we use for loops, there are also cases where the array is initialized beforehand and values are replaced each iteration to prevent memory allocations (who knows, might save time). Here is the snippet (running in LV2015, the attached VI is saved for LV2011 though): Please tell me if you find anything wrong with how it is implemented. Benchmarking is no easy task, unfortunately. Now here are some results based on that VI. First with the AND connected to RANDOM values: Next, the AND is connected to TRUE values: And finally the AND connected to FALSE: So what do we get from that? Basically, initializing a Boolean array and replacing values in a for loop did not work very well (all the '(no allocation)' times). My guess is that the compiler optimization is just better than me; since the number of iterations is pre-determined, LabVIEW can optimize the code much better. All timings with the shortcut are worse than the ones without. So using a shortcut does not improve performance; in fact it's just the opposite. Results might change if the Boolean operations are much more complex, however in this case LabVIEW is doing its job far better than we do with our 'shortcut'. The last case without a for loop ('Fastest solution') gives a clear answer: LabVIEW will optimize the code to work on such an amount of data. My guess is that the operations also use multiple threads inside, so maybe we'd get different results with parallel for loops enabled in the other cases. And most likely much more stuff. The most interesting point is the comparison of timings for the 'Fastest solution'. As you can see there is no real difference between any of the three cases. This could either mean my benchmark is wrong, or LabVIEW has no shortcut optimizations like C. What do you think? Shortcut Benchmark LV2011.vi
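The "shortcut" discussed above is what text languages call short-circuit evaluation. As a small aside of mine (not from the original post), here is a Python sketch of the distinction being benchmarked: a short-circuiting operator skips the second operand when the first already decides the result, while a plain bitwise operator always evaluates both sides, the way the post describes LabVIEW's And primitive behaving on whole arrays.

```python
calls = {"n": 0}

def expensive():
    # Stand-in for a costly Boolean sub-expression; counts its evaluations.
    calls["n"] += 1
    return True

# Short-circuit ("shortcut"): the left side is False, so the right side
# is never evaluated at all.
result = False and expensive()
assert result is False
assert calls["n"] == 0

# Non-short-circuit (bitwise &): both sides are always evaluated,
# even though the outcome is already decided by the False.
result = False & expensive()
assert result == False
assert calls["n"] == 1
```

Whether skipping the second operand is actually faster depends on how expensive it is versus the cost of the extra branch, which is exactly the trade-off the benchmark above measures.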
  23. This works fine on my computer (tried it at least 10 times). Do you have the same issue on a clean project? EDIT: Could you upload an example which fails on your computer?
  24. Have you considered VPN tunnels? If you set up the cRIO/PXI to only accept communications from the local address space (which I don't know is possible), you can become part of the local address space only by connecting via VPN (or if the hacker is a survival-trained helicopter pilot with faked papers / an employee working on the rig...). Of course the VPN tunnel is the critical part here, so it should be set up with great care and use secure dongle/token systems with randomly generated tokens. The connection itself, however, is secure and any communication is encrypted. EDIT: in theory (I'm no expert in this)