
pete_dunham

Members
  • Posts: 56
  • Joined

  • Last visited

Everything posted by pete_dunham

  1. Thanks for the replies. I am going to look into VI Package Manager and see how it fits into my system; maybe this will help solve my Perforce dilemma. Yes, maybe this wasn't clear in my first post: while I can view my C: drive in Perforce, the only folders that are held under SCC (by Perforce) are the folders that I have chosen to put under control.
  2. I am starting up LabVIEW development at a client's site that uses Perforce SCC. I have been debating how to set up the workspace for a development machine and finally decided on having my workspace be the entire C:\. This seemed to be the only way to include the <instr> and <vi> directory dependencies in the SCC; I am struggling to understand how to include all my dependencies without moving files in and out of a smaller-scope workspace. My gut feeling is that having C:\ as a workspace will come back to bite me. Has anyone else done this? I have used SVN in the past, so my hesitation comes from my SVN experience and now using a new SCC (Perforce). Thanks! Pete
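     A minimal sketch of how the Perforce client spec might be set up here; the workspace name, depot layout, and LabVIEW install path are made up for illustration. Even with the client Root set to C:\, only the depot paths listed in the View are actually held under Perforce control, which is what makes a whole-drive root workable:

        Client: pete_dev
        Root:   C:\
        View:
            //depot/MyProject/...            //pete_dev/Projects/MyProject/...
            "//depot/LabVIEW/instr.lib/..."  "//pete_dev/Program Files/National Instruments/LabVIEW 8.2/instr.lib/..."
            "//depot/LabVIEW/vi.lib/..."     "//pete_dev/Program Files/National Instruments/LabVIEW 8.2/vi.lib/..."
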
  3. Are you using Legacy DAQ somewhere else in your code? I had code that mixed DAQ and DAQmx and saw a similar behavior to what you are seeing. NI confirmed this was caused by Legacy DAQ.
  4. is TortoiseSVNing my labview code...and it feels gooooooddddddddd

  5. Yes, I was just thinking about how sweet this was (post normal LabVIEW hours).
  6. I have some code that does just this. You need to "send in" the field names as created in your table (ADO doesn't like spaces). I usually use this subVI to add one field at a time to see if I can write values. Remember that you have to use a different method to add/replace fields in a row that's already created (this is an UPDATE action). I have attached the code. LAVA wouldn't let me upload the sample database, but I included a screenshot; the database created in the screenshot worked with the included subVI. Good luck and let me know if I can help. -Pete Write Row Into Database example.vi
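     A self-contained sketch of the row-write versus field-update distinction mentioned above; Python's sqlite3 stands in for the ADO connection, and the table and field names are hypothetical, but the SQL is the same idea that gets sent through ADO:

        # sqlite3 is only a stand-in for the ADO connection; table/field names are made up.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE Results (SerialNum TEXT, TestVoltage REAL, PassFail TEXT)")

        # Writing a new row (an INSERT) -- presumably what the attached subVI builds through ADO.
        conn.execute("INSERT INTO Results (SerialNum, TestVoltage, PassFail) VALUES (?, ?, ?)",
                     ("UUT001", 3.31, "Pass"))

        # Adding/replacing a field in a row that already exists requires UPDATE instead.
        conn.execute("UPDATE Results SET TestVoltage = ? WHERE SerialNum = ?", (3.29, "UUT001"))

        conn.commit()
        print(conn.execute("SELECT * FROM Results").fetchall())
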
  7. just botched an OS install while Rangers botch 8th inning pitching to Yankees

  8. WOW. It took me a long time to figure this out. Thanks for all the replies; each one of them helped me work through this. It turns out that I DID need to upgrade to 8.5 to be able to use the In-Place structure. However, I was forced to make some significant redesigns in order to use the In-Place structures effectively. While this was a significant effort, it made sense to keep a consistent architectural design to act on my "object". In the code that was causing me problems, I had created subVIs to act on individual parameters of one of my "objects" in a loop. This did indeed cause LabVIEW to make memory allocations (this was brought up in response to my 2nd code post). I think this happened because I was sending both the top-level cluster and an unbundled array of objects through the same loop (then rebundling). Now that I have figured out how to view memory allocations, I think I will be using it in a lot of my future designs. THANKS again for all the input. I would be happy to discuss this problem/solution if anyone comes across something similar. -Pete Kudos.
  9. Hmm. I was thinking about that. Any thoughts on saving objects as binary data? That is, opening the file back up, acting on the data, and then resaving it as binary? Does anyone have ideas about how this would perform? My framework is really just collecting and acting on serial data, so I may have some performance time to spare. My original architecture was built with the thought of saving each UUT object's data somewhere. I was hoping to implement the ability to pull up any tested UUT's object data (maybe with a custom viewer) to see all the data that was collected during the test. This seemed easier than creating a custom database and spending time converting each LV piece of data to a corresponding field value. -Pete
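     A conceptual sketch of that save/act/resave cycle, with Python's pickle standing in for LabVIEW's Flatten To String plus a binary file write; the UUT record and its fields are made up for illustration:

        # pickle is only a conceptual stand-in for flattening LabVIEW data to a binary file.
        import pickle

        uut = {"serial": "UUT001", "readings": [1.02, 1.05, 0.98], "passed": True}

        # Save the object's data as binary after the test...
        with open("UUT001.bin", "wb") as f:
            pickle.dump(uut, f)

        # ...later, load it back, act on the data, and resave it as binary.
        with open("UUT001.bin", "rb") as f:
            uut = pickle.load(f)
        uut["readings"].append(1.01)
        with open("UUT001.bin", "wb") as f:
            pickle.dump(uut, f)
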
  10. I saw this error today when I was trying to save to a database on a network, but my PC login/user did not have access to the network address. There may be other reasons for this error, but I would think it has something to do with an incorrect path location (either save or open).
  11. Ben, I have been reading over this link. A lot of good info, thanks. This explains what is happening to me (I think), with the problems described above: http://digital.ni.co...62571B5006B46E1
  12. Yes, I am still stuck on this. An upgraded PC and an upgraded LabVIEW version (8.5) haven't helped; LabVIEW just allocates more memory before prompting with an error message (an error message in 8.5.1 versus an actual crash in 8.2.1). I am struggling with where and how to implement the In-Place structure to fix this, and I am not sure how to benefit from Show Buffer Allocations. If this design causes LabVIEW to copy data, I am at a loss, because it seems like an intuitive way to implement test "objects". Without sending the complete <main data> cluster, the individual <object> cluster elements (in certain subVIs), and the individual <object> array nested within the <main data> cluster, there doesn't seem to be a way to keep my design neat. By neat I mean sending one cluster wire <main data> to my most top-level subVIs. Another, better design approach doesn't jump out at me. I felt like I built this design on LabVIEW "best practices" and good programming technique, but clearly I must still be missing something. Does the code presented clearly depict a memory problem? Is it possible I am looking at the wrong section as the cause of my program's crash? Other parts of my code use several VI Server references, which I know isn't best practice or desired; I checked to make sure I am closing all these references, but could this be an issue? Or is the consensus that my nested object architecture is killing this program? Thanks for everyone's input and insight. This problem is frustrating, but it has been a great learning experience so far. -pete
  13. AND... per these replies, it appears that LV 8.5 and higher addressed this issue with the In-Place structure. Now I need to convince my employer to leave LabVIEW 8.2.1 (2007) behind!
  14. Ben, thanks for the reply. The original .png of the code was a sample VI that I made to simplify the question, but the subVI in question is actually three case structures deep (inserted below). Do you think that is the issue? I am not sure how to change my code for fewer buffer allocations without some major redesign (which may need to happen anyway). I should note too, per my original post, that the PC this code runs on has 512 MB of RAM. I plan to test the same code with more RAM installed and will update if this fixes my memory problem. I hate to use hardware to forgive a design flaw (if that is what I have), but it appears to be a balancing act: paying some memory overhead for a scalable design. My earlier working program was much harder to modify/understand, but it didn't crash because of memory issues.
  15. Felix, thanks for the recommendation. Because of how my program logic operates, I believe I know what code is running when LV gets stuck. Basically it is one (sub-)state machine that runs pretty much continuously (gathering serial data); other subVIs run only periodically (change mux channel, increment counter, etc.). What is strange is that there seems to be a tipping point in this code. Memory slowly increases (I'm not too concerned about this), but at one point in my program's execution the memory usage will almost double or triple and cause LV to stop responding. However, I am not changing what code I am executing at that point (that is, the subVI running during the crash has run many times before without this memory expansion).
  16. Thanks for the reply. I just researched that today. The development system that this project is tied to is stuck at LV 8.2.1. Am I correct in thinking that this structure isn't available for 8.2.1?
  17. Hoping to get some insight from others; LAVA-ers have repeatedly saved the day for me. BACKGROUND: I spent time switching a working program over to a new architecture, as it was becoming difficult to manage its scalability in its previous state. However, with my new, more scalable architecture I must be missing a fundamental LV programming philosophy: LabVIEW's memory keeps increasing until it crashes (memory increases and then CPU usage hits ~100%). I have used Tools -> Profile -> Performance and Memory and Tools -> Profile -> Show Buffer Allocations. I can see the VIs that are taking up more and more memory but am stumped on how to fix the problem. There was significant effort in my first redesign, and I was hoping I wouldn't have to rework this architecture. To get right to the point, there are 2 basic designs that I thought were "clever" but must not be. The first is my basic subVI structure: I created an array of cluster data where each array element is an object for each UUT I am testing (running parallel UUTs), and I am using FOR loops to read/write the data of each "object". Note: I am not using LV's OOP features, I am just calling this cluster an object since it ties to a UUT. 1) Basic subVI. 2) The example subVI above would be called in a "sub-state machine" <below>. I used state machines with functional globals, so that after each sub-state runs, the subVI exits (to return to the main state machine) and then comes back to the sub-state that is being held in a functional global. I am passing a lot of data (multiple clusters with arrays). I am guessing I have designed something that is fundamentally flawed; can someone break my heart and tell me why? ***Right before posting this I noticed that the "sub-state machine" main data cluster doesn't need a shift-register terminal, since this data is being passed in and out each time this subVI is run. Does this have an impact on memory?*** Thanks!
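     A rough text analogue of the architecture described above, just to make the structure concrete (this is not LabVIEW code, and every name and field is invented): each UUT "object" is one element of an array, a FOR loop touches every element on each call, and the sub-state machine keeps its own state between calls the way a functional global would:

        from dataclasses import dataclass, field

        @dataclass
        class UUT:                        # one "object" (cluster) per unit under test
            serial: str
            readings: list = field(default_factory=list)

        def read_serial_data(uut):        # stand-in for a subVI acting on one object
            uut.readings.append(0.0)      # modify the element in place rather than copying the array

        def sub_state_machine(uuts, state={"name": "read"}):   # mutable default persists, like a functional global
            if state["name"] == "read":
                for uut in uuts:          # FOR loop over the array of UUT objects
                    read_serial_data(uut)
                state["name"] = "check"
            elif state["name"] == "check":
                state["name"] = "read"
            return uuts                   # exit so the main state machine regains control

        uuts = [UUT("UUT001"), UUT("UUT002")]
        for _ in range(4):                # the main state machine would call this repeatedly
            sub_state_machine(uuts)
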
  18. is having my first ever major memory leak problem

    1. Cat

      And you say you've been using LV since 2002?? You are *way* overdue!!

  19. I am horrified!!! I highlighted execution to debug and saw dataflow being ignored in my program. I have 3 Booleans wired to Build Array, which is wired to OR Array Elements. Two values are getting passed as FALSE, and then OR Array Elements outputs FALSE. The third Boolean (TRUE) never gets to Build Array (highlight execution shows 3 elements in the array). I am in the twilight zone. I should note that this is an older version of LabVIEW, 8.2.
  20. On my current team, past LV programmers have always left the Abort button available in the test programs. Team members in the lab who use and evaluate our GUIs and programs are on my back about my hiding the Abort button and adding a proper STOP button. (Some people don't like change.) This is more of a fun argument, because I am the one controlling this feature... but I want to officially end this spar. I thought fellow LAVAiers (???) could do the heavy lifting for me and bring peace back to the LV world. Without getting into the nitty gritty of using development setups as tester setups (the VIs never got pushed to exes), I am looking for an arsenal of reasons to end this argument. (1) We are using high-voltage power supplies... so SAFETY has been my number one argument. I also found this on ni.com: "Provide a stop button if necessary. Do not use the Abort button to stop a VI. Hide the Abort button. The Abort Execution button stops the VI immediately, before the VI finishes the current iteration. Aborting a VI that uses external resources, such as external hardware, might leave the resources in an unknown state by not resetting or releasing them properly."
  21. test operators are giving me a hard time for hiding the abort button and adding a proper STOP button in the program. Looking for a summary of why I am right!

  22. Does anyone know if National Instruments is still updating/supporting the Blackfin Embedded Module software? On the NI site it is not listed under LabVIEW for Embedded Applications and doesn't appear to be for sale anymore. Thanks.
  23. installing Windows 7 on MBP has been a nightmare

    1. Michael Aivaliotis

      7 on a Mac? Virtual machine?

    2. pete_dunham

      No, running it "natively" using rEFIt. I also set it up to boot into XP since I wasn't sure about Windows 7 (but seems solid so far).

  24. If a DIO line is driving the change in your indicator, you might want to look into "Change Detection", if it is available for your hardware: http://decibel.ni.co...t/docs/DOC-2280 If this makes sense for your application, you can use Change Detection with a User Event to limit polling of the processor. The user event could include the code you currently have in your event, plus the change of the Boolean indicator. This way the change of the DIO line drives the event and the Boolean indicator, instead of vice versa. When I used the example above with a 6533, I did notice that a user event was generated when the code was first called, for some reason. There was an easy workaround, since after this first "misfired" user event the subsequent Change Detection events were triggered successfully.
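     A sketch of the same hardware-driven approach using the nidaqmx Python package instead of the LabVIEW example linked above; the device and line names are hypothetical, and the exact calls and signatures should be checked against the nidaqmx documentation:

        import nidaqmx
        from nidaqmx.constants import AcquisitionType, Signal

        task = nidaqmx.Task()
        task.di_channels.add_di_chan("Dev1/port0/line0:7")    # hypothetical device/lines

        # Let the hardware's change detection, not a polling loop, decide when something happened.
        task.timing.cfg_change_detection_timing(
            rising_edge_chan="Dev1/port0/line0:7",
            falling_edge_chan="Dev1/port0/line0:7",
            sample_mode=AcquisitionType.CONTINUOUS)

        def on_change(task_handle, signal_type, callback_data):
            # Plays the role of the User Event case: react to the DIO change here.
            print("DIO change detected:", task.read())
            return 0

        task.register_signal_event(Signal.CHANGE_DETECTION_EVENT, on_change)
        task.start()
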