Posts posted by John Lokanis

  1. Yes, this is a confirmed bug with NI. I ran into this in 8.2. Not sure if or when they plan to fix it. I told them that the parent should 'inherit' the defer updates setting from the child.

    For now, my solution is to defer both when I want to draw something time consuming, like a tree control update.

    This does lead to some complex logic...

    -John

  2. QUOTE (Daklu @ Mar 20 2008, 02:35 PM)

    If I implement it as variant attributes I'll have to convert it every time I want to do an operation on the tree. How does variant conversion compare to array access in terms of speed?

    Are there any other inherent advantages to using arrays or variants that make one "better" than the other for this?

    The variant tree structure is fast for reading but slow for writing. So, if you build the data tree once and then read from it often, this is a good approach.

    If you do a lot of reading and writing, then I would look at the Map Class mentioned above. I have not tested it against my variant tree but I plan to someday if I ever get some spare time.

    Let us know how you end up solving the problem!

    -John

  3. QUOTE (i2dx @ Mar 15 2008, 01:26 PM)

    What interface does your toolkit use? ADO (ActiveX) or ADO.NET or some other external method to reach ADO databases?

    -John

    QUOTE (jdunham @ Mar 14 2008, 09:59 PM)

    John, We log tons of data to SQL server, sometimes a few dozen statements per second. The main optimization we use is that all statements which don't require a reply are sent into a LabVIEW queue. Then a separate process flushes the queue once per second, concatenates all of the query statements, and inserts them in a single ADO call. This process is not using a significant percentage of the CPU, so we could probably be logging a lot more.

    If you need to return results, then it seems like you are doing more than just logging. Our system also needs results sometimes, and while those calls can't be batched together, we can still call several queries per second.

    Good luck

    I am doing something similar. Each of my test engines (one for each DUT) has a thread that offloads all DB interactions and uses a queue. Calls that need return data pass a single element queue reference along with their SQL call to place the results in. Also, calls that require an answer are enqueued at the opposite end so they get processed first. "Insert only" calls just queue up and get processed when there is time.
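    Since a LabVIEW diagram can't be pasted inline here, below is a rough C# analogue of that offload pattern, just to illustrate its shape. The class, member, and connection names are made up, and the "enqueue at the opposite end" priority behavior is omitted; the point is only that insert-only calls carry no reply slot while calls that need an answer do.

        using System.Collections.Concurrent;
        using System.Data.SqlClient;
        using System.Threading;
        using System.Threading.Tasks;

        // Hypothetical work item: the SQL text plus an optional reply slot
        // (standing in for the single element queue reference) for calls
        // that need results back.
        class DbWorkItem
        {
            public string Sql;
            public TaskCompletionSource<object> Reply;   // null for insert-only calls
        }

        class DbWorker
        {
            private readonly BlockingCollection<DbWorkItem> _queue = new BlockingCollection<DbWorkItem>();
            private readonly string _connStr;

            public DbWorker(string connStr)
            {
                _connStr = connStr;
                new Thread(Run) { IsBackground = true }.Start();   // the offload thread
            }

            // Insert-only calls just queue up and return immediately.
            public void Post(string sql) { _queue.Add(new DbWorkItem { Sql = sql }); }

            // Calls that need an answer hand over a reply slot and wait on it.
            public object Query(string sql)
            {
                var item = new DbWorkItem { Sql = sql, Reply = new TaskCompletionSource<object>() };
                _queue.Add(item);
                return item.Reply.Task.Result;
            }

            private void Run()
            {
                using (var conn = new SqlConnection(_connStr))
                {
                    conn.Open();
                    foreach (var item in _queue.GetConsumingEnumerable())
                    {
                        using (var cmd = new SqlCommand(item.Sql, conn))
                        {
                            if (item.Reply == null)
                                cmd.ExecuteNonQuery();                     // fire-and-forget logging
                            else
                                item.Reply.SetResult(cmd.ExecuteScalar()); // hand the result back
                        }
                    }
                }
            }
        }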

    I have not tried merging calls together. All my calls are to stored procedures so they all return something (custom error message). I don't think I can merge them. I do detect errors and retry the call 5 times before giving up. I get a lot of timeouts due to the DB being overloaded.

    We can have several testers all logging at once and up to 200 transactions per second hitting the database.

    I also did a little experiment recently to test methods of moving data between .NET and LabVIEW. I was building an ArrayList in .NET from the record set. The potential issue was that adding to an ArrayList allocates new space on each call. This recurring malloc could slow down the process. So, I changed it to a fixed 2D array in .NET. The problem here is that the DataReader that gets the record set from the server is a forward-only reader that cannot know how many rows are being returned. So, there is no way to know how big an array to create before reading. So, I used a call that I knew would return only 200 rows and hardcoded the size of the 2D array to 200 rows. The funny thing was, when testing this, the static array method was no faster than the ArrayList method. So, my new guess is the forward-only data reader is the bottleneck in the process.

    FWIW, I was getting back a 200 row, 15 field record set in ~200ms. Still, the processing time on the DB to do the fetch and build the record set was < 1 ms. So, there is some significant overhead somewhere in there.
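    For reference, the two approaches from that experiment look roughly like the C# below (the fixed row count is just the 200-row assumption from the test above; the exact method names in my helper differ). Since the same forward-only SqlDataReader drives both loops, that would explain why the fixed array was no faster than the ArrayList.

        using System.Collections;
        using System.Data.SqlClient;

        static class RecordSetReaders
        {
            // Preallocated 2D array: only possible when the row count is known up front.
            public static object[,] ReadFixed(SqlDataReader reader, int rows, int fields)
            {
                var data = new object[rows, fields];
                int r = 0;
                while (r < rows && reader.Read())
                {
                    for (int f = 0; f < fields; f++)
                        data[r, f] = reader.GetValue(f);
                    r++;
                }
                return data;
            }

            // Growing ArrayList: reallocates as it goes, one object[] per row.
            public static ArrayList ReadGrowing(SqlDataReader reader)
            {
                var list = new ArrayList();
                while (reader.Read())
                {
                    var row = new object[reader.FieldCount];
                    reader.GetValues(row);   // fills the whole row in one call
                    list.Add(row);
                }
                return list;
            }
        }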

    -John

  4. Thanks for the replies. It looks like both of the VIs mentioned above use COM to access the ADO interface instead of ADO.NET. Do you feel COM (ActiveX) is faster than .NET in LabVIEW?

    One difficulty I see in .NET vs COM is the lack of a native means of moving the recordset from .NET into LabVIEW. The ADO.NET assembly wants to use a Reader to iterate through the results one by one and fetch them. While this is a more portable interface (especially for script based languages that run in a VM, like VBScript or JavaScript), it is very slow in LabVIEW. As a result, I had to implement some tricks learned from Brian Tyler's old LV Blog and some C# code to work around this.

    Also, since both of your implementations use ADO, they must go through an additional layer (OLE DB) to reach the database. This is good if you want code that can use a variety of databases, but if your target is only SQL Server, then the ADO.NET System.Data.SqlClient provider (see http://msdn2.microsoft.com/en-us/library/k...ks0(VS.80).aspx) should be faster.
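    For what it's worth, in LabVIEW terms the difference is just which .NET constructor you instantiate; a minimal C# comparison (server, database, and table names are placeholders) would look like this:

        using System.Data.OleDb;
        using System.Data.SqlClient;

        class ProviderComparison
        {
            static void Main()
            {
                // Through OLE DB (the extra layer that ADO/ActiveX also goes through):
                using (var conn = new OleDbConnection(
                    "Provider=SQLOLEDB;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI"))
                using (var cmd = new OleDbCommand("SELECT COUNT(*) FROM SomeTable", conn))
                {
                    conn.Open();
                    System.Console.WriteLine(cmd.ExecuteScalar());
                }

                // Through the native SqlClient provider (talks to SQL Server directly, no OLE DB layer):
                using (var conn = new SqlConnection(
                    "Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI"))
                using (var cmd = new SqlCommand("SELECT COUNT(*) FROM SomeTable", conn))
                {
                    conn.Open();
                    System.Console.WriteLine(cmd.ExecuteScalar());
                }
            }
        }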

    Also See http://sqlbible.tar.hu/sqlbible0123.html

    And http://docs.msdnaa.net/ark_new3.0/cd3/cont...rgas%5CCh12.pdf

    And http://msdn2.microsoft.com/en-us/library/ms675532.aspx

    So, I guess ADO via ActiveX and ADO.NET are the leading methods. I have looked at directly calling the DBLIB DLL but that looks a bit more formidable to program.

    I wonder if passing the SQL call outside of LabVIEW to another environment would make more sense?

    -John

  5. I have written a large test application that logs all data live to an SQL server. I am using the System.Data.SqlClient .NET assembly to interact with the database. This is working, but it seems to be a bit slow and I am wondering what the best method is out there. There are lower level drivers between this assembly and the transport layer. Has anyone written VIs to directly access those?

    One thing I did was write a little C#.NET code to move the record set data from .NET to LabVIEW in a single block move instead of an iterative process. This was a huge time saver, but I am wondering if there are more ways to speed things up.

    I realize that LabVIEW is not very speedy when calling .NET code. Would it be worthwhile to try an implementation using COM (ActiveX) instead?

    What about directly calling the lower level DLLs that the .NET code calls?

    Is there another solution entirely that I could try? All I need to be able to do is pass a SQL query to the DB and get back a record set.

    thanks for any help or insights.

    -John

  6. QUOTE (Mads @ Mar 13 2008, 02:43 AM)

    but there is a big no no in the upper left corner of the window - a window name that ends with .vi.

    And an icon that is the default LV icon. If you are going to go to the trouble of doing all that custom graphics work, then make a nice app icon (in all the various sizes and color depths) as well.

    That said, those are some nice FPs.

    Personally, my general rule for GUI design is: if another LV dev can tell it was written in LabVIEW, I have failed. For that reason, I study the style guidelines of the target OS before building my GUIs.

  7. QUOTE(Tomi Maila @ Mar 4 2008, 02:22 PM)

    It doesn't support LabVIEW classes; all other types should be supported except ones that contain LabVIEW classes. This is a limitation of the technique used. If you get a broken wire with something that doesn't contain LabVIEW classes, then I have a bug that needs fixing. Can you provide any sample code and steps to reproduce the broken wire?

    It was developed solely within LabVIEW 8.5, no voodoo. It does use XNodes, though.

    See the attached zip for an example. You will need to wire the three objects together. When you do, the connection between the second and third objects will be broken. If you save the VI, the wire will no longer appear broken, but if you try to create an indicator from the value output, it will be a string and not the cluster used to define the datatype.

    This is an extreme example, as you can see by the complexity of the datatype. But, I don't see why this should not work.

    I was under the impression that XNodes were not available to devs outside of NI after 8.0. I guess you are one of the lucky ones...

  8. QUOTE(Yuri33 @ Mar 5 2008, 01:46 AM)

    (reading a global does not block other reads of the same global like a functional global would).

    Actually, I don't think that is true. I seem to remember reading somewhere that reading a global requires a task switch to the user interface thread, thereby causing parallel accesses to be done serially. Perhaps an NI dev can chime in on this?

    But, you are right, my idea using the feedback node based functional global is not a parallel design. It was not intended to be. If you need parallelism, I recommend single element queues or perhaps Tomi's new toolkit, if it supports parallel access.

    But in the case of an actual LV global, it still allows write access anywhere, which was what I was trying to prevent.

  9. QUOTE(Tomi Maila @ Mar 3 2008, 11:07 PM)

    Funny that you came up with your WORM just now, as I also wrote a Write-Once-Read-Many variable called One-Time Store and released it last week as a library that ships with the Active VI Toolkit. Unlike your example, One-Time Store must be referred to with a valid refnum so it cannot be used as a global directly, although the refnum itself can be stored in any of the globals available. The image below illustrates how the One-Time Store library functions.

    I gave this a try. Cool tool. It does not seem to like complex data types, however. I get a broken wire from the create node to the read node. Not sure if my cluster is too complex or if it is because it is a typedef'd cluster. It seems to work for simple clusters. Is there a known limit to the datatypes it can support?

    I would be interested to know how this was built. I am assuming some sort of xnode voodoo scripting stuff that required 7.1.1 to access...

    -John

  10. That is an interesting solution. What you seem to have built is a pass by reference variable that can store any data. I am curious how this was implemented. I assume that the data is stored as a variant, but how do you cast the variant back to the original data type without needing a type input on the read function?

    Will this allow you to store ANY data type (including complex custom types) or just LV primitive data types?

    I guess I need to download it and peek inside...

    -John

  11. QUOTE(tcplomp @ Mar 3 2008, 11:17 AM)

    Looks nice, although the Demo is a little bit counter intuitive.

    Could a similar approach be made with notifiers?

    Ton

    A notifier or a single element queue could work, but these would allow multiple writes. My goal was to make a 'by reference static variable'. This would always be initialized by the app in the beginning and could then be used anywhere in the app's VI tree. If you are working with several devs on a project, you could provide this VI as a tool for writing sub-vis that need access to the main app's FP references. The nice thing is that the number of VIs that must have an instance of the type-def'ed cluster of references is kept to a minimum. This makes it easier to maintain as you add more FP controls that others need access to. Since it is write once, you don't need to worry about it being overwritten by accident in a sub-vi.

    Also, I suspect that the feedback node is a very efficient way of storing and accessing this data. The implementation, therefore, is much simpler than using Queues, Notifiers or traditional functional globals, IMHO.

    To Jim's comment, I also use CRUDs, but mine are ACRUDs: Auto Create, Read, Update, Destroy. These use a single element queue and a feedback node for storage of the queue ref. They test the queue ref to see if it is valid on each call and, if not, create it. They then perform the READ, WRITE or DESTROY operation and output the queue reference back to the feedback node. I like this better than using the First Call? function because I can destroy and recreate several times within an app as needed to control memory usage.

    But again, the point of this tool was a STATIC variable, since FP references do not change while the app is running.

    -John

  12. No, not that kind of WORM!

    A Write Once, Read Many variable.

    Background:

    For large applications with many FP controls that you want to access and modify from sub-vis, a common means of doing this is to build a large cluster of references to all the FP controls and then pass this cluster to all the sub-vis that need access to any of these controls. While this uses the 'value' property to read and write to the FP controls, which is usually frowned upon due to speed issues, in a GUI this is usually not an issue due to the slowness of the human user. The problem with this method is needing to have this large cluster of FP control refs on every sub-vi. This can take up a lot of space on the sub-vi panels and can cause problems if you update the cluster with more control refs in the future, causing it to resize and overlap with other FP objects. (Obviously, the cluster of refs would be a strict type def.)

    A Solution:

    So, how to solve this?

    Well, the need is to set all these references at the beginning of execution and then be able to read them from anywhere in the application. Sounds like a good use for a Global variable. But I don't like global variables. I won't go into all the reasons, but one of the key ones is that they can be written to by anyone anywhere in the application, which would be bad in this case.

    So, what I needed was a write once, read many variable that could be accessed from any VI in the application. It would be initialized only once when the EXE started. Sounds like a job for a Functional global! Well, there is a new form of functional global that is extremely simple using the feedback node.

    See the attached simple example. This uses only one reference. You must ensure that the first call that sets the value completes before any subsequent calls execute.

    Download File:post-2411-1204569204.zip
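    For anyone reading this without LabVIEW handy, the behavior I'm after is roughly what a set-once container looks like in a text language. A minimal C# sketch follows (the class and member names are made up; the real implementation is just a feedback node on a diagram):

        using System;

        public static class WriteOnceStore
        {
            private static object _value;     // would hold the cluster of FP references
            private static bool _written;
            private static readonly object _lock = new object();

            // Called exactly once at startup; any later write is rejected.
            public static void Write(object value)
            {
                lock (_lock)
                {
                    if (_written)
                        throw new InvalidOperationException("Write-once store already written.");
                    _value = value;
                    _written = true;
                }
            }

            // Read from anywhere in the application after the single write.
            public static object Read()
            {
                lock (_lock)
                {
                    if (!_written)
                        throw new InvalidOperationException("Read before the one-time write.");
                    return _value;
                }
            }
        }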

    What do you think? Does this seem like a good solution or is there a better one out there? Are there any pitfalls of this solution?

    thanks,

    -John

  13. QUOTE(Zalon @ Feb 25 2008, 06:14 AM)

    Hehe, yeah i did that at first, but i don't like the look of the constant in the Block Diagram view :)

    But you are right, it should be a constant, so that's what I'll use.

    If you have a large structure that looks 'bad' as a BD constant, try instead making a sub-vi that has nothing in it except an indicator of your data type. Then when you place this on the BD, it does not take up as much space. I also recommend that you name this VI something like "CONSTANT - my data.vi" and make the datatype a type-def.

    For smaller cluster constants that don't look 'good' on the BD, try changing the orientation by right-clicking on the border and selecting 'arrange by - horizontal'.

    I always make type-defs of any data type that is not a LV simple type. I also always make a type-def out of all enums. It is a good practice to get into.

  14. From NI Support:

    ----------------------

    I have filed a report (4ILAJ889) with R&D for further investigation. I will update you again after the developers look into this issue. I tested your application in LabVIEW 8.2.1 and did not see this problem, so I'm assuming that something changed between LabVIEW 8.2.1 and LabVIEW 8.5. Are you aware of the built-in LabVIEW calendar that you can select from with a timestamp control? I'm still looking for a suitable workaround. Is it necessary that the calendar pop up? I'm sorry for the delay and I can assure you that we are looking for a solution for you.

    -----------------

  15. Ok, so here is one idea:

    For each type of subsection that may be repeated N times, create a single template. Then use this to generate several documents, one for each instance of the test. I will then need some way to merge these docs together into one large document. Has anyone done this before? Does Word expose this functionality in its COM interface?
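    If it helps, here is roughly what that merge could look like through the Word interop assembly from C# (file paths, bookmark names, and the suffixing scheme are all hypothetical; I have not verified this against the Report Generation Toolkit VIs). Word has no bookmark 'rename', so the sketch re-adds each newly inserted bookmark under a suffixed name and then deletes the original:

        using System.Collections.Generic;
        using Word = Microsoft.Office.Interop.Word;

        class ReportMerger
        {
            static void Main()
            {
                var app = new Word.Application();
                Word.Document master = app.Documents.Open(@"C:\reports\master.doc");
                string[] subDocs = { @"C:\reports\test_section_1.doc", @"C:\reports\test_section_2.doc" };

                for (int i = 0; i < subDocs.Length; i++)
                {
                    // Remember which bookmarks already exist, then append the sub-document.
                    var existing = new HashSet<string>();
                    foreach (Word.Bookmark bm in master.Bookmarks) existing.Add(bm.Name);

                    Word.Range end = master.Content;
                    end.Collapse(Word.WdCollapseDirection.wdCollapseEnd);
                    end.InsertFile(subDocs[i]);

                    // Re-add every new bookmark with a numeric suffix, then drop the original.
                    var added = new List<Word.Bookmark>();
                    foreach (Word.Bookmark bm in master.Bookmarks)
                        if (!existing.Contains(bm.Name)) added.Add(bm);
                    foreach (Word.Bookmark bm in added)
                    {
                        master.Bookmarks.Add(bm.Name + "_" + (i + 1), bm.Range);
                        bm.Delete();
                    }
                }

                master.Save();
                app.Quit();   // a real tool would also release the COM references
            }
        }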

    thanks,

    -John

  16. Hi,

    I am trying to build a complex report in Word using the Report Generation Toolkit. I am using bookmarks to insert my data into a template. This is working, but I now need to programmatically expand the report based on the data. For example, if the report is on a set of N tests, then each test needs its own section with its own unique set of bookmark names to insert into. So, I need a means of making a subsection template with the bookmark base names and then appending this template to the main template N times, renaming the bookmarks in each appended section by adding a # to the end of them so I can reference them later when I want to insert the data.

    I have no idea how to do this or if this is even possible. Has anyone else run into this and come up with a solution?

    Thanks for any ideas,

    -John

  17. I have a set of VIs that use the .NET MonthCalendar control to allow for date entry. I think I have found a bug in how LV registers for .NET events. My VI works every time on the first call but fails after 3-4 calls to the sub-vi. The problem seems to be that the .NET control is not registering its DateSelected event every time, like it should.

    See the attached zip file for an example. Give it a try and let me know if you figure it out or can at least reproduce my bug.

    Download File:post-2411-1203050038.zip

    Thanks,

    -John

  18. There are many things about .NET in LabVIEW that are slow. :( Hopefully NI will put some effort into fixing this some day. But, there are workarounds. If you go look at Brian Tyler's old blog from when he was at NI, there are a few good tricks for moving large data sets between .NET and LabVIEW. I have used these tricks to make a SQL interface via .NET that is 10x faster than the NI toolkit. Unfortunately, it does require writing a little C# code (but just a little :P ).

    Maybe if we all leaned on NI to reduce their .NET overhead, things would get better.

  19. I read your article. I agree that to the novice, using XML in LabVIEW is not easy. You do need to understand a lot about XML features, like the schema and namespaces. But it is not impossible and can be a very powerful tool for large and complex data driven applications.

    Also, the best way to read and write XML from LabVIEW (in my opinion) is to use the MSXML .NET assemblies. I have tried the XML toolkit from NI as well and find it lacking compared to rolling my own with .NET calls.

    I am actually working on a presentation to our local LV user group about .NET and using it to work with XML files. I use XML extensively in my LV applications, mainly as a universal script format to drive the data-driven portions of my application.

    I have tried many ways of parsing XML within LabVIEW. Given the tree nature of XML, my first method was to use recursion (via VI server) to walk an XML structure and then build a representation in a LabVIEW data structure. The first attempt used an array of clusters with elements to contain the node links and the data at the node. The second version used a variant tree (via attributes) to construct the tree in the LV data space. This was a more natural representation of the data but was slow to access (due to the overhead of the variant attribute VIs, I suspect). This was before LVOOP. I understand there are better ways to represent tree structures in LV using the new OOP features, but I have not explored that yet.

    Each of these methods required you to then translate the data into a LV structure (nested arrays of clusters) to make it usable (and the code readable) in the rest of the app.

    My current implementation is to parse the schema directly into a LV data structure consisting of arrays of clusters of arrays of clusters...etc... This is the fastest method so far and goes directly to a structure that can be used in the rest of your application. The downside of this approach is the structure of the schema is directly coded into the LV parsing VIs. So, if the schema changes, so must your code. For my applications this is usually not a big issue since we design schemas up front that are flexible for the design goal of the application.

    However, this certainly is not a good generic solution for LabVIEW and XML. I think the holy grail here is likely some sort of LVOOP implementation that can dynamically traverse an XML file and then build a representation in memory that is simple and clear to access in the rest of the application.

    Another approach is to use the .NET interfaces to interact with the XML file 'live' instead of preloading and parsing it into a LV structure. This may have advantages in some applications, but in my experience I have always wanted the data to be resident in memory and fast to access with standard LV array and cluster tools.

    One caveat with using .NET assemblies to access XML is the confounding documentation (or lack thereof) on how to do this. I have spent many an hour staring at MSDN pages trying to understand how or what to call to get the result I wanted. Also, there are some pitfalls to watch out for. For example, if you use the MSXML schema validation methods, you cannot simply close the reference to the XML reader when you are done. Instead, you must call the Close method on the XMLValidatingReader and the XMLReader before you close their references. If you don't, the OS will keep a file lock on the XML file until LV is closed.
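    As a concrete illustration of that caveat, here is a small C# sketch (the file and schema paths are made up) using the newer XmlReader.Create route rather than XMLValidatingReader; wrapping the reader in a using block guarantees the Close/Dispose call, so the OS releases the file lock without having to restart the host process:

        using System;
        using System.Xml;
        using System.Xml.Schema;

        class XmlValidationDemo
        {
            static void Main()
            {
                var settings = new XmlReaderSettings();
                settings.ValidationType = ValidationType.Schema;
                settings.Schemas.Add(null, @"C:\data\script.xsd");
                settings.ValidationEventHandler +=
                    (sender, e) => Console.WriteLine("Validation: " + e.Message);

                using (XmlReader reader = XmlReader.Create(@"C:\data\script.xml", settings))
                {
                    while (reader.Read())
                    {
                        if (reader.NodeType == XmlNodeType.Element)
                            Console.WriteLine(reader.Name);
                    }
                }   // reader closed here; the file lock is released
            }
        }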

    Once I complete my presentation I will try to remember to post the PowerPoint and VIs here for others to view. In the meantime, I would be happy to share my code with anyone who is interested.

    I also strongly recommend looking at the w3schools website (http://www.w3schools.com/xml/default.asp) to learn more about XML.

    Also, the best tool I have found for editing XML and creating schema is XMLSpy. It is not cheap, but it has paid for itself many times over on my projects.

    -John

  20. QUOTE(Norm Kirchner @ Dec 18 2007, 12:53 PM)

    Where are you getting your info about TestStand from? TestExecutive 1.0 spec sheets?

    We use a very customized DB schema and use TS to log to it, albeit through LV code and not the native TS DB stuff

    Multiple parallel tests on multiple DUTs.... have you heard of the batch process model or the parallel model?

    Not trying to be a smart-######, but after using TS for 2 years now I have a lot of faith in how much it can do. (first hand experience too)

    TS is really an open environment and I would find it hard to think of something that you can't configure it to do. (except streamline memory usage easily)

    ~,~

    PS You can execute code on remote PCs from TS too

    I admit that I have not looked at TestStand in the last 2 years, so things might have changed. But I just took another look at what is on the NI site about parallel testing and could not find a single reference to the ability to run multiple tests at the same time on the same DUT. They did show how you can run more than one DUT at once, but every example showed the tests for a given DUT running in sequence. Unfortunately, that does not meet our requirements. Also, I need to not only log test results to a database but also pull all test sequences from the database using a plan/suite concept and log the start and completion events of every test step as they happen. So, while they state their interface is customizable, I would be surprised if it could meet all those requirements.

    But, I could be wrong. So, if you know of some documentation or white papers that address the parallel test issues, please post the links. Maybe I could use that on my next project.

  21. Create a system multi-column listbox. Fill in several rows with some data.

    Set the properties Active Cell and Edit Position to a row in the middle of the ones you populated.

    Set the value to that same row.

    Set the key focus to true.

    Create a simple while loop to keep the VI running after all these settings are applied.

    Run the VI. The row you set should be selected.

    Use the up/down arrow keys to move to another row more than 2 rows away.

    Stop the VI.

    Run the VI again.

    Use the arrow keys to move the selection again. Instead of moving relative to the programmatically selected row that is highlighted, it will jump to the row that is one (up/down) from the row you had moved to right before you last stopped the VI.

    Why does it do this? Why does it remember the last selection you navigated to? How can I override this so the selection I set programmatically is the one it will move relative to when the user uses the arrow keys?

    Is this a bug?

    -John

    Download File:post-2411-1198024019.vi
