Everything posted by Tomi Maila

  1. I agree. I've just seen this trick in multiple examples here on LAVA and wanted to make the point that nobody should use it in any serious application. This issue reminds me of an instructive story about the importance of proper error handling. A guy working for Nokia had backup software that backed up his workstation to a network backup server. The IT administrators decided to change the IP address of the backup server, and unluckily nobody told the guy who used it for backing up his workstation. The backup software never gave any error message that the server couldn't be reached. One day the hard drive of the workstation broke down, and the guy started to recover his system from the backup server, only to notice that there actually were no backups. Luckily none of us writes such careless applications, right?
  2. It seems that the DLL import wizard cannot manage this case. Create a cluster in LabVIEW that consumes exactly the same amount of memory as the C structure and pass this cluster as a parameter to the library, using Adapt to Type for the input type. I think for value structures you use the Handles by Value specifier and for pointer structures the Pointers to Handles specifier. When creating the cluster, note that strings and arrays are stored differently in LabVIEW than in C, so don't use strings and arrays in your cluster. You may try the following conversions (C -> LabVIEW):
     • int, long -> I32
     • unsigned int, unsigned long -> U32
     • short -> I16
     • unsigned short -> U16
     • char -> I8
     • unsigned char -> U8
     • long long -> I64
     • unsigned long long -> U64
     • char[N] -> a cluster of N U8 constants
So for your example struct, create a cluster with the following elements: a cluster of 32 U8s (or a cluster of 4 U64s), an I16, an I32, and a cluster of three U8s (see the size-check sketch below). Tomi
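To make the byte counting above concrete, here is a small sketch using Python's ctypes (LabVIEW diagrams can't be shown as text). The struct layout is reconstructed from the cluster described above (char[32], short, int, char[3]) and is only an assumption; substitute the real fields and alignment settings of your DLL.

    # A sketch only: a hypothetical struct matching the cluster described above.
    # Adjust the fields and the packing to match what the DLL actually expects.
    import ctypes

    class ExampleStruct(ctypes.Structure):
        _pack_ = 1  # assumes a packed struct; remove for default C alignment
        _fields_ = [
            ("name",  ctypes.c_char * 32),   # char[32] -> cluster of 32 U8s
            ("count", ctypes.c_short),       # short    -> I16
            ("flags", ctypes.c_int),         # int      -> I32
            ("rgb",   ctypes.c_ubyte * 3),   # char[3]  -> cluster of three U8s
        ]

    # The LabVIEW cluster must consume exactly this many bytes, so comparing
    # sizeof() with the flattened size of the cluster is a useful sanity check.
    print(ctypes.sizeof(ExampleStruct))  # 32 + 2 + 4 + 3 = 41 bytes when packed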
  3. I took a look at the built-in FFT method shipping with LabVIEW. It opens a VI reference to itself and never uses the reference for anything. Nor does it close the reference properly. Any idea why FFT opens the reference to itself? Is the reference used somehow directly from C code to increase performance? Tomi
  4. LVPunk, your lower loop is not safe. If an error occurs in the queue-flushing node, the loop exits but doesn't return an error, so the user has no way to react. If you are using LV 8.20, I suggest you create a decimating queue class that works identically to a queue but automatically decimates the data written into it (a rough sketch of the idea follows below). Tomi
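Since LabVIEW code can't be pasted as text here, the sketch below expresses the decimating-queue idea in Python. The class and method names are purely illustrative; a real LV 8.20 implementation would wrap the queue primitives in an LVClass as suggested above.

    # Illustrative sketch: a queue that thins out older samples instead of
    # growing without bound. Not a LabVIEW API, just the concept.
    from collections import deque

    class DecimatingQueue:
        def __init__(self, max_size, factor=2):
            self._items = deque()
            self._max_size = max_size
            self._factor = factor

        def enqueue(self, item):
            self._items.append(item)
            if len(self._items) > self._max_size:
                # Keep every Nth element so the backlog stays bounded.
                self._items = deque(list(self._items)[::self._factor])

        def dequeue(self):
            return self._items.popleft()  # raises IndexError when empty

    q = DecimatingQueue(max_size=8)
    for sample in range(20):
        q.enqueue(sample)
    print(list(q._items))  # a decimated history of the written samples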
  5. Would you like to download an evaluation version of the Math Kernel Library and test the performance? I'd be interested in the results, especially if you have a Core 2 Duo processor available somewhere. It should be fairly easy to integrate the library with LabVIEW. We can help you with the integration issues. Tomi
  6. I'm sitting in a cafe by the Baltic Sea, drinking my caffè latte and reading the forums on my laptop. This is a place I come to when I need to design my software, and today is one of those days. The sun is shining and the sea is frozen. It's -20 C (-4 F) cold, but still, the combination of pure white snow and bright sunshine makes for excellent weather.
To the topic. I think an error system based on error codes and source strings is not enough for modern applications, and there needs to be a better system. However, we don't want to lose backward compatibility. Therefore I propose the following. Let's use the source field of the error cluster to pass both multiple errors and any "object" that actually carries the error-related information. By object I mean any LabVIEW type, both classic by-value types and LVClass objects. The idea is that when an error occurs, the user can throw an error object, that is, any data, which would be flattened to a string and appended to the source field of the error cluster. The developer can then try to catch these errors by matching the type of the object against some sample types. LabVIEW would provide primitives for throwing and catching errors. Classic errors could be thrown and caught using the classic error cluster as the object type. If the object thrown were an LVClass object, then a more general class could be used to catch a more specific class error, i.e. the type match doesn't need to be exact; a parent type would match a child type error. I wrote a draft (below) of what I'd like the source field of the error cluster to look like. The error cluster would be able to hold one or more error objects of arbitrary LabVIEW type. I use notation similar to a BNF grammar; I assume most of you are not familiar with BNF, so I explain the notation briefly below.
I also think it could be a good idea to combine the error objects I proposed with an easy mechanism to select one of the built-in error classes to throw and to specify the content of the class. Furthermore, it would be a good idea to allow the user to browse through the built-in error classes when writing an error handler. So here is my second proposal. Include a new tool, "Error Specifications", in LabVIEW that looks and feels like the LabVIEW options dialog. Instead of the category tree on the left there would be a LabVIEW error class hierarchy from which the user could select one of the built-in error classes to throw. When an error is selected from the tree, the right-hand side of the window would show error specification options for this particular error. There would be a new express node or XNode (or similar), "Throw Error Express", that would open this dialog when dropped on the block diagram. The developer could interactively specify the error to throw. The node would have optional inputs for parameters that the developer cannot specify interactively but that need to be specified by wiring runtime data to the node. When this node is executed, it would add a new error object to the error cluster according to the interactive specifications. The performance of this new node is not that important, as it would only be executed when an error has already occurred.
For catching errors, a new stacked diagram would be needed, or rather the stacked sequence needs to be modified accordingly. The idea is that a stacked sequence could have shift registers that allow passing data from one frame to the next, so that the user doesn't need to add sequence locals for each frame separately. The first frame could be used as a "try statement". The next frames can be used for handling errors. The error cluster could be wired to such a new kind of shift register of a stacked sequence. Each frame would be responsible for handling one particular type of error. This makes the code clearer to read, as the error-catching cases are hidden and don't take 80% of the space in each block diagram. To increase performance, the user could force the sequence to skip the error frames if no error has occurred. Perhaps this could be the default behaviour if an error cluster is wired to a shift register of such a sequence. Color coding could indicate this default behavior in a similar way to when an error cluster is wired to a case structure. Furthermore, for catching the errors a new interactive express primitive would be needed. This primitive would catch an error the user has interactively specified by selecting one of the built-in classes from the error class hierarchy. If an error were caught, this node would return the caught object together with an "error caught" boolean that can be used to control whether further actions need to be taken. This node would execute very fast if no errors have occurred. In addition to these two new express nodes, there need to be more advanced built-in primitives for throwing and catching errors that the new express nodes would rely on. The error hierarchy browser of the new express nodes needs to be extensible so that developers can add new errors to the hierarchy and somehow distribute these additions together with their libraries. Perhaps a new directory, error.lib, could be used for user-defined error classes that would be searched by the interactive tool.
For the new error system to be helpful, existing built-in functions and libraries should be modified so that they throw these modern errors. The general error handler and user-defined error handlers could still handle them, but a modern error handler would do it better. Second, a well-designed hierarchy of error classes is needed for the model to be useful. When designing the hierarchy, some sort of dialog or probe method in the "general error class" is needed so that developers can create their own error dialogs easily. A rough flattening sketch based on this draft follows after the specification. Tomi
================= Modern Error Handling Specifications DRAFT ====================
x y       denotes x followed by y
[x]       denotes zero or one occurrences of x
{x}       denotes zero or more occurrences of x
x | y     denotes either x or y
'a'       denotes the literal value a
String    denotes any text string without the '<' character
Binary    denotes the flattened binary string of anything
Int       denotes an I64 integer in binary format
UInt      denotes a U64 integer in binary format
=== The specification for the source field of the error cluster ===
SourceField ::= Classic | Modern
=== The specification of the classic error, that is, the present format of error clusters ===
Classic ::= SourceString [Reason | Details]
SourceString ::= String
Reason ::= '<err>' String
Details ::= '<append>' String
=== The specification of modern errors that could coexist with classic errors ===
Modern ::= FirstSource '<objects>' ObjectCount FirstErrorObject {ErrorObject}
FirstSource ::= SourceString
ObjectCount ::= UInt
=== Specification of error objects that could be thrown with the error cluster ===
=== and later caught by matching against different types ===
FirstErrorObject ::= ErrorObject
ErrorObject ::= ObjectLength BinarySourceString Anything
BinarySourceString ::= StringLength String
Anything ::= StringLength Binary
StringLength ::= UInt
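To show how the draft grammar above could be used in practice, here is a rough Python sketch that flattens one "modern" error into the source field. The big-endian u64 encoding for the UInt fields and the treatment of ObjectLength as a UInt are my own assumptions; the draft leaves those details open, and the strings used below are purely illustrative.

    # Rough sketch only: flattening one "Modern" error per the draft grammar above.
    # Assumptions: UInt fields are big-endian u64, and ObjectLength is a UInt too.
    import struct

    def flatten_error_object(source: str, payload: bytes) -> bytes:
        # ErrorObject ::= ObjectLength BinarySourceString Anything
        # BinarySourceString ::= StringLength String, Anything ::= StringLength Binary
        src = source.encode()
        body = struct.pack(">Q", len(src)) + src + struct.pack(">Q", len(payload)) + payload
        return struct.pack(">Q", len(body)) + body

    def build_modern_source(first_source: str, error_objects: list) -> bytes:
        # Modern ::= FirstSource '<objects>' ObjectCount FirstErrorObject {ErrorObject}
        out = first_source.encode() + b"<objects>" + struct.pack(">Q", len(error_objects))
        return out + b"".join(error_objects)

    obj = flatten_error_object("My Class.lvclass:Open.vi", b"\x01\x02 flattened object data")
    print(build_modern_source("Error 7 occurred at Open File", [obj]))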
  7. My Dell D820 laptop with an Intel Core 2 Duo T7200 and 4 GB of memory does the same thing on average in about 30 µs if I run the real FFT in four parallel loops under LV 8.20. So if the FFT is the only task running, my laptop would analyze 55 of your images per second. As the T7200 is not even the fastest processor available, you should achieve your goal by simply buying a better computer. If you need a laptop, the T7600 is the best available processor for the task. If you can use a workstation, the Intel QX6700 is the best for the job. By best I mean best-performing. Tomi
  8. My first suggestion for you: download VIPM from http://jkisoft.com/vipm/ and install the OpenG variantconfig package. The variantconfig package contains tools for writing and reading variant data to and from INI files, and it's licensed under the BSD license so you can easily use it in your commercial software as well. Tomi
  9. I know you wanted a browser-based solution, but in case you don't find a suitable free product, take a look at Microsoft Office Live Meeting.
  10. Seems like this was not a very popular topic. I assume we need to forget about it then, as there would be nobody to follow what NI is patenting besides perhaps me and PJM.
  11. I attach a handy tool for debugging. It lists all VIs belonging to any LVClass that are currently in the memory of any application instance. The VIs are categorized by the context they are open in. Using the tool you can see when a class is loaded into the memory of a specific application instance and when it's removed from memory. Download File:post-4014-1170670103.vi
I think the following actions may take us to the goal:
     • Before scripting the code or terminals referring to LVClasses, force NI.LV.XnodeCodeGen to open these classes into its memory, or script a class constant onto the diagram.
     • Avoid using variants carrying class constants that originate from another application context to script the terminals (this is the default way of doing it), as variants carry a reference to the class but this reference is located in the wrong context.
     • When you are done scripting the code and the code gets copied to the VI containing the XNode, try to figure out a way to close all the class references in the NI.LV.XnodeCodeGen context so that the classes don't get locked. Classes get locked when they are open in more than one context simultaneously.
     • The XNode-generated code cannot access class private data, as the generated code doesn't belong to the class. I think a way around this is to create a common parent class that returns a queue reference, which can then be used to get the private data held in the queue.
I was also thinking of a concept of dual objects (sketched after this post). These objects could be both by-value and by-reference objects. When the object is in the by-reference state, one could sync the content from the queue into the private data members, so that the class would then hold both the queue reference and the by-value data. Synchronizing the object would lock the by-reference object. From then on the object would behave as a by-value object until it was synchronized back to the queue and the lock released. This would allow faster access to the private data inside class methods that call other class methods, as the object could be passed to subVIs in by-value mode instead of by-reference mode. All the class methods would need to be able to detect whether the object is in by-value or by-reference mode, but that could be done using common subVIs or XNodes. The problem with this dual-object concept that I haven't figured out (besides how to implement the XNodes) is how to get the dynamic dispatch wires to pass through the synchronizing XNodes without breaking. Now the weekend is over and I have to concentrate on real work that needs to get done. Tomi
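Here is the dual-object idea above sketched in Python, since it can't be drawn here. The lock stands in for the emptied single-element queue, and all names are illustrative only; nothing here is a LabVIEW API.

    # Sketch of the dual-object idea: by-reference storage that can be
    # "synchronized" into a by-value working copy while the reference side
    # stays locked, then synchronized back to release the lock.
    import threading

    class DualObject:
        def __init__(self, data):
            self._lock = threading.Lock()  # plays the role of the emptied single-element queue
            self._data = dict(data)        # by-reference storage
            self._local = None             # by-value working copy while checked out

        def sync_to_value(self):
            self._lock.acquire()           # lock the by-reference object
            self._local = dict(self._data) # queue -> private data members
            return self._local

        def sync_to_reference(self):
            self._data = dict(self._local) # private data members -> queue
            self._local = None
            self._lock.release()           # release the lock

    obj = DualObject({"counter": 0})
    working_copy = obj.sync_to_value()   # cheap, by-value access inside "methods"
    working_copy["counter"] += 1
    obj.sync_to_reference()              # back to by-reference mode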
  12. I haven't managed to script any XNode-generated code that contains user-defined LVClasses without making LabVIEW either unstable or crash. So I tried another way to achieve my goal: I used the XNode to script directly to the block diagram of the VI that owns the XNode. The idea was to have the XNode stay on top of the scripted code so that the scripted code would be invisible or something similar. The problem in this case is that whenever I script something to the block diagram directly, the addition of the new node triggers the ability VIs to rerun, and the code is generated again and again and again. So I end up in a loop of automatic code generation. However, perhaps there is a way to detect in which cases the call to the ability VIs is a rerun triggered by the XNode modifying the owner diagram and when it's a real change on the diagram. I'm beginning to be a little frustrated, as all the paths I've tried seem to be dead ends. Perhaps it's time to watch TV for the rest of the evening. Download File:post-4014-1170670103.vi
  13. I'll carry on my lonely discussion. A small step forward and some steps backward: I've managed to place a class constant into the generated code without crashing LabVIEW. It seems that if a class constant is explicitly scripted into the generated code, then the class gets loaded into the memory of the other application context as well (NI.LV.XnodeCodeGen?) and LabVIEW manages the situation better. The generated code cannot access private data members, as LV thinks it's not part of the class. I wonder if you can script it to be part of the class, at least for the time the code is generated. Furthermore, opening the class in a second application context makes the class locked, as LabVIEW locks classes that are open in multiple application contexts. This is not a very good thing when one is trying to modify a class method by placing an XNode into it.
  14. AQ, thanks very much for your answer!!! :worship: :thumbup: Is there any public source where I can find information about passing LVClass objects between application contexts, either when the application contexts exist on the same computer or when they are physically distinct? What is needed for an object passed from one context to another to become a valid object in the other context? Especially interesting is the case where the runtime type of an object is not known at the sending end: how is the proper runtime class loaded into the memory of the receiving context when at compile time one cannot define the runtime type of the object being received, only that it's of some more generic type X? EDIT: I added the fourth point to the list in my post above (when the XNode-generated code is executed, it's executed in the same context as the VI where the XNode instance is located).
  15. I made some further investigations into the application contexts related to XNodes. It seems there are three different application instances related to the issue:
     • The VIs which you edit and run are in an application context of their own, normally either the Main Application Instance or a project-related context.
     • The ability VIs are executed in the NI.LV.XNode application context.
     • The code is generated into a third application instance, which is NI.LV.XnodeCodeGen.
     • When the XNode-generated code is executed, it's executed in the same context as the VI where the XNode instance is located.
No wonder this is a tough issue to deal with... Tomi
  16. Let's pretend they do exist. Would you, AQ, think that in such an imaginary situation NI would change the context the XNodes run in, in a future release of LabVIEW, to be the same one where the VI using the XNode is open? Of course XNodes do not exist, but just try to imagine such a situation.
  17. I like this picture, may I use it in my signature... Although I wonder if I should use by-value objects instead...
  18. I think I may have at least a partial solution. However, I'm too tired to test it right now; it's already past midnight here. The idea is to delegate the tasks that don't work in the application context where the XNode runs back to the application context where the VI is open. I simply tested calling such a delegate VI from an ability VI, and it seems that at least some tasks can be delegated back to the original application instance. The image below shows how; it's not very complicated. Simply wire the XRef terminal of the ability VIs that need to delegate some tasks to the property node at the left end of the image. This is my contribution for today; I hope somebody has made a leap forward by the next time I visit LAVA. p.s. The XRef reference didn't seem to work in the Initialize ability, so don't try to do delegate calls from there. Edit: Only 3 downloads for the challenge source code. I thought this would be a challenge that everybody wanted to get their hands on immediately. Tomi
  19. There are no problems editing the ability VIs; all the problems occur at runtime of the ability VIs, so in that sense they are XNode-instance related. First, the "Adapt to inputs" ability VIs detect the type of the connected wires from the variant that holds a wire type. When I set the "object in" and "object out" terminals to adapt to the type connected to "object in" and then connect a user-defined LVClass object to an XNode instance, everything starts to get weird. The XNode instance looks OK from the VI where it's embedded, although it's not executable, but the code doesn't get correctly generated. That is, the variants carrying the LVClass objects inside the XNode instance must somehow have a different meaning than they would have in the normal development application context. Probably the XNode instance application context cannot access the LVClass pointers in the application instance where you are developing your class. Therefore strange things happen when you try to do this. LabVIEW quite often crashes, so be careful. I tried to typecast the objects to a cluster and, well, anything, just to get the queue reference out of the objects. I didn't succeed, but this could still be one way to go. If one succeeds in somehow getting the reference to the queue in the application context where the XNode instance is running, then perhaps one can access the queue without explicitly accessing the object. Sounds a bit tricky. Other ideas are welcome as well.
I've set the following XNode-related INI keys: With these set, one can right-click on an XNode instance and select the XNode Wizard Menu subitem. That menu contains a list of all of the ability VIs. When opened from there, you get the runtime instance and can debug it normally. "Generated Code" gives you the code that the XNode has generated, or it may crash LabVIEW if you are using LVClasses inside your XNode. When the code is generated, my "Generate Code.vi" ability also copies the generated code to the clipboard, from where you can paste it somewhere else. I hope this helps you forward. And for those NI employees who don't think XNodes exist, try searching the LabVIEW help for the keyword "XNode"... Tomi
Edit: If we don't manage to get this working, there is an option to use XNodes as script generators and let the XNode script directly to the VI containing the XNode. I think that's very much what XNodes actually do. At least, having weird broken hidden wires on the block diagram when the XNode-generated code is not working indicates this. So I assume that the generated code is indeed generated directly onto the block diagram of the VI where the XNode is located, and then it's simply hidden, like hidden controls on the front panel. I wonder if we can detect these hidden wires and nodes using scripting.
  20. (Re: Linux vs Windows) The Windows version can call .NET and ActiveX classes. If you are an academic institution, purchase an Academic Department license. It costs $5000 and you get the Professional version of almost all NI software; LabVIEW for Mac, PC, and Linux are all included. EDIT: The Academic Department license allows anybody in your department to use the licenses for non-commercial projects. Tomi
  21. Hi, we all know about the lack of by-reference LVClasses. There have been some community implementations of by-reference LVClasses, but they all include VIs such as "Get data and lock". For each separate class, one needs to create a separate copy of these "embedded subVIs". Furthermore, since method names are not allowed to collide, the names of these embedded subVIs need to differ in each separate class. This naming issue prevents one from just saving a copy of a template class to create a new class, as the subVI names would then collide. I've had an idea: why not replace the embedded subVIs like "Get data and lock" with XNodes? These XNodes would adapt to each individual class they are used with, and there would be no need to embed them in the class itself. On the contrary, they could reside in a common location such as user.lib or similar. I've tried to write such an XNode. The problem keeping me from finishing is that XNodes run in a different application context and therefore don't know about the open classes in the development context. I attach my project to this post. There is an XClass.lvproj file which contains a sample class. The class has a control, Class Private Data.ctl, that defines the class private data members. The class private data contains a queue reference so that the class acts like a by-reference class; the queue type is defined by Class Private Data.ctl. The class method Get Element Template.vi is what I'd like my XNode to do (see the image below; a text-language sketch of the underlying pattern follows this post). The XNode is in the subdirectory Get Data. It has a few abilities that control how the XNode behaves. It can detect when an LVClass is connected to its "object in" input and it can adapt to this type. The problem is that this is not a valid type inside the XNode, so the code generation succeeds but the code remains broken. Jim's XNode challenge was a good starter for this project. I give you a little more challenging task: make this XNode work. If this XNode works, then a very nice implementation of by-reference classes isn't far away. Tomi Download File:post-4014-1170423749.zip
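Because the attached LabVIEW project can't be reproduced as text, the sketch below shows the generic "get data and lock" / "set data and unlock" pattern that a single-element queue gives you, written in Python. The names are illustrative only; the XNode version would generate the equivalent queue calls adapted to each class.

    # Sketch of the single-element-queue by-reference pattern discussed above.
    # Dequeuing the only element "locks" the data; enqueuing it back releases it.
    import queue

    class ByRefObject:
        def __init__(self, private_data):
            self._q = queue.Queue(maxsize=1)  # single-element queue as the reference
            self._q.put(private_data)

        def get_data_and_lock(self):
            # Blocks every other caller until set_data_and_unlock() is called.
            return self._q.get()

        def set_data_and_unlock(self, private_data):
            self._q.put(private_data)

    ref = ByRefObject({"element": 42})
    data = ref.get_data_and_lock()   # other accessors now wait here
    data["element"] += 1
    ref.set_data_and_unlock(data)    # release: the next accessor proceeds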
  22. Then I need to make a product suggestion, at least for events that would be automatically routed to other application contexts. EDIT: Below is the suggestion I posted to NI product suggestions (a minimal sketch of the protocol follows this post).
Events and queues no longer work between application contexts in LV 8.20. There should be a method for communicating between application instances or applications on different computers asynchronously; calling a remote VI reference is not enough. What I suggest is that you add "remote event" functionality to LabVIEW. Any event reference could be passed to a remote application instance. When passing the event reference to the remote end, a network address and application reference would be added to the reference so that the remote end knows in which application the event is expected to occur. When such a remote event is then registered at the remote end, the remote end sends information to the event-generating end saying "I'll be listening to this event." The event-generating end then knows that whenever such an event occurs, it needs to send information to all registered remote applications. The remote applications can then catch the event. Tomi
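As a rough illustration of the register-then-forward protocol in the suggestion above, here is a minimal Python sketch. Remote endpoints are represented by plain callables; in the real proposal they would be network addresses plus application references, and everything below is illustrative only.

    # Illustrative sketch only: the event-generating end remembers who said
    # "I'll be listening" and forwards each event to every registered remote end.
    class RemoteEventSource:
        def __init__(self):
            self._listeners = []  # registered remote applications

        def register(self, deliver):
            # Called when a remote application registers for this event.
            self._listeners.append(deliver)

        def generate(self, event_data):
            # Whenever the event occurs, forward it to all registered remote ends.
            for deliver in self._listeners:
                deliver(event_data)

    source = RemoteEventSource()
    source.register(lambda data: print("remote app A caught:", data))
    source.register(lambda data: print("remote app B caught:", data))
    source.generate({"event": "Value Changed", "new value": 3.14})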
  23. The container state terminal of the Init ability VI in the XControl contains a container ref which allows modifying the initial XControl container. Setting the value of the property node "Ctl.Indicator" to true makes an XControl dropped on the front panel default to an indicator. However, be careful, as Init is also called when the XControl needs to be updated to a newer version or when the XControl is copied. Tomi
  24. Sounds good to me; would you create a package? I really don't know how. Tomi