Everything posted by ShaunR

  1. Easiest way: create a buffer (U8 array) when you initialise the DLL event reference (pass in an already-initialised array of bytes using Initialize Array.vi). In the callback function, dereference the data and memcpy it to the buffer you allocated at init, then PostLVUserEvent the array. No mutexes required, because the copy is atomic and LabVIEW does its own managed memory copy. You can read the array directly out of the terminal of the event structure. There is no way to get out of requiring a C/C++ DLL to do this mediation for you, by the way - LabVIEW cannot create C/C++ callbacks. If you can find a .NET version then you can have a VI callback function, IF they properly advertise the event, though.
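    A minimal sketch of the mediating DLL described above, assuming LabVIEW's cintools header (extcode.h). The vendor callback name and signature (DataCallback) and the init function are hypothetical stand-ins; the caller is assumed to keep the pre-allocated array alive, and the usual lv_prolog.h/lv_epilog.h packing headers are omitted for brevity:

        #include <string.h>
        #include "extcode.h"   /* ships with LabVIEW (cintools) */

        /* LabVIEW 1D U8 array handle layout */
        typedef struct { int32 dimSize; uInt8 elt[1]; } LVU8Array, **LVU8ArrayHdl;

        static LVUserEventRef g_event = 0;    /* user event refnum from LabVIEW */
        static LVU8ArrayHdl   g_buf   = NULL; /* buffer pre-allocated in LabVIEW */

        /* Called once from LabVIEW with the event refnum and the array */
        MgErr InitCallbackData(LVUserEventRef *ev, LVU8ArrayHdl buf)
        {
            g_event = *ev;
            g_buf   = buf;
            return mgNoErr;
        }

        /* The driver's callback: dereference the data, copy it, post the event */
        void DataCallback(const uInt8 *data, int32 len)
        {
            if (g_buf == NULL) return;
            if (len > (*g_buf)->dimSize) len = (*g_buf)->dimSize; /* no overrun */
            memcpy((*g_buf)->elt, data, (size_t)len);
            PostLVUserEvent(g_event, (void *)&g_buf); /* LabVIEW copies the data */
        }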
  2. You can't create C/C++ callbacks directly in LabVIEW, but you can create a DLL, or wrap an existing DLL, that registers for the callback and generates a LabVIEW event.
  3. ShaunR

    VIM Demo-HAL

    I meant using the control label, like the following, so it uses the same semantics as the event primitives. The only reason I had a text control with the name in the text was that the VIM wouldn't transfer the label name inside its macro, so I couldn't read it. Otherwise I would have done so. You can do this with Xnodes, though, I think. Hint: use messages "UI>MOTION>SET POSITION>x,%.3f", "UI>MOTION>SET POSITION>y,%.3f" and "UI>MOTION>SET POSITION>z,%.3f", and have one "MOTION" service instead of having 3 actors.
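    For illustration, composing one of those messages in C; the function name and the messaging hand-off are hypothetical, only the string format comes from the hint above:

        #include <stdio.h>

        /* axis is "x", "y" or "z"; position is formatted to three decimals */
        void send_set_position(const char *axis, double position)
        {
            char msg[64];
            snprintf(msg, sizeof msg, "UI>MOTION>SET POSITION>%s,%.3f",
                     axis, position);
            /* ...hand msg to the MOTION service's queue (framework-specific) */
        }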
  4. Version 0.2

    280 downloads

    This is an experimental demo to investigate VIMs (vi macros). It was a bit of fun to see if VIMs could be used to encapsulate events in LabVIEW, which had been a bugbear of mine for quite some time. You can see the entire thread here. VIMs are an NI experimental technology similar to Xnodes but less mature. The purpose of this release is to clarify the previously unstated licence: other forum topics are building on the original demo, so they need a permissive licence (MIT), and this release serves as an unambiguous statement to that effect. There are a few differences from the original (which I have decided to call version 0.1) but they are minor. Note: this may or may not work for you out of the box. If it doesn't, please do not post; the purpose is to clarify the licence for others to build upon, not to provide a working example. The VIM technology is itself experimental and unsupported by NI, so most issues you encounter will be due to that, and it is unlikely there will be another version posted here.
  5. Name: VIM HAL Demo Submitter: ShaunR Submitted: 25 Sep 2015 Category: *Uncertified* LabVIEW Version: 2009 License Type: MIT The description is the same as the Version 0.2 download entry above. Click here to download this file
  6. ShaunR

    VIM Demo-HAL

    Sounds good. I'm not really surprised you are having these difficulties. When you consider others talking about identifying which features are available in which LabVIEW versions, you know backwards compatibility is broken. So far, no one has revealed what the common sets are, preferring to play with the latest and greatest. With your multiple-axis motion controller, I disagree. In the service model you would just have a Motion Controller service and send it X, Y and Z commands, or more likely "TABLE>MOVE>PRESET1" or similar. There should be no reason why you cannot have multiple versions of the same type unless I'm missing something. Only the name string is important, and that is derived from the constant label (this is how queues work, after all). The constant's name should dictate the event name, not the data type. Even if you do name them the same, it is basically a no-op for whichever call gets there second. The events are global, so two events with different data types that have the same name is not desirable or needed and would have so many caveats as to make them unusable, IMO. So I'm not really sure what issue you are trying to solve by adding the terminal back in. That just seems to be the motto for Xnodes. A good improvement on the last one, though. Before, it had the same problems as the VIM and then some. It worked out of the box this time for me, so that's great.
  7. I think you are probably looking at it slightly awkwardly. You went for a compartmentalised solution according to some best-practice ideology and then found it got awkward for concurrent processes. You want it to be easy to manage and scalable, and the way you did it was exactly that, but the downside was creating bottlenecks for your asynchronous processes.

    I had the same problem with SOUND.vi, whereby I needed to be able to play arbitrary asynchronous WAV files simultaneously when a single sound could be tens of seconds long. A SOUND.vi that could only process one message at a time was useful, but I wanted a more generic solution. The solution: I made SOUND.vi a service controller. Other processes ask it to "PLAY" a particular file. It then launches a sub-process for that file and returns immediately, having no more involvement in the data transfer.

    How could this work with, say, TDMS? You have the FILE service. You send it a message like FILE>STREAM>TDMS>myfile[.tdms]. The FILE service launches a listener that is registered for "myfile" messages. You tell the DAQ service to launch an acquisition - DAQ>STREAM>AI1>myfile or similar. And that's it! The DAQ pumps out messages with the label "myfile" and the listener(s) consume them. Of course, the corollary is that you can use "FILE>STREAM>TEXT", "FILE>STREAM>BIN" etc., even at the same time, and you still have your FILE>WRITE and FILE>READ, which you don't really have to launch as sub-processes.

    You've started a producer and a consumer and connected their messaging pipes (see the sketch below for how such messages break down). You can do that as many times as you like and let them trundle on in the background. Your other "plugins" just need to send the right messages and listen in on the conversation (also register for "myfile").
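    As a sketch of how such hierarchical string messages might be split into their parts (the function name and the fixed four-field layout are assumptions for illustration, not the demo's actual parser):

        #include <string.h>

        /* Split a "SERVICE>COMMAND>SUBCOMMAND>ARG" message on '>' */
        int parse_message(const char *msg, char fields[4][64])
        {
            int n = 0;
            const char *start = msg;
            while (n < 4) {
                const char *sep = strchr(start, '>');
                size_t len = sep ? (size_t)(sep - start) : strlen(start);
                if (len >= 64) len = 63;      /* truncate oversize fields */
                memcpy(fields[n], start, len);
                fields[n][len] = '\0';
                n++;
                if (!sep) break;
                start = sep + 1;
            }
            return n; /* number of fields found */
        }

        /* parse_message("FILE>STREAM>TDMS>myfile", f) yields f[0]="FILE",
           f[1]="STREAM", f[2]="TDMS", f[3]="myfile" */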
  8. I have something that I use quite a bit for many things, but I don't think I have anything as sophisticated as you would be requiring. It's like a trackable "completeness" application - how complete a project is. It checks that descriptions are filled out, whether VIs are orphans or re-entrant, and lots of other things for keeping track of a project's progress and making sure certain standards are met. You can compare previous project scans and do diffs of the changes in issues. It has plugins that can access its database, so you can extend its features pretty much indefinitely - I've been abusing it recently by adding scripting functions to set VI names, make them re-entrant, and other bits and pieces it shouldn't really be able to do as a passive analyzer. It doesn't do testing as such, but it supports plugins, so you could create a plugin or two to populate its database with results, or attach another database to do cross-DB queries. It also allows in-place SQL queries, so you could also define views of your test data combined with all the other VI information. There is already a plugin for requirements coverage à la Requirements Gateway. It's one of those tools you always use but would be a nightmare to productionise and could cause havoc in the wrong hands. There is an image on LavaG somewhere.
  9. I was trying to decide how I would succinctly describe the difference between an API and a Service and couldn't really come up with anything. API stands for Application Programming Interface, but I tend to use it to describe groupings of individual methods and properties - a collection of useful functions that achieve no specific behaviour in and of themselves: "a PI". Therefore, my distinguishing proposal would be state and behaviour, but applications tend to be stateful and have all sorts of complicated behaviours, so I'm scuppered by the nomenclature there.
  10. Well, seeing as your multicast address starts with 235, I would say probably not. However, I avoid Linux whenever possible, so I cannot help much further than saying what the net address is for, because the answer will depend on how you set up the network cards and firewalls in all the layers (including Windows).
  11. The net address is the address of your network card and is usually only used if you have multiple cards installed in the system, so you can bind to a particular card. You have quite a stack of network virtualisation there. You'll probably have to set up routing to forward UDP multicast packets from your router.
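    For illustration, this is roughly what binding a multicast subscription to a particular card looks like at the socket level (POSIX sockets assumed; the addresses and the function name are placeholders):

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int join_group(const char *group_addr, const char *net_addr, int port)
        {
            int s = socket(AF_INET, SOCK_DGRAM, 0);
            if (s < 0) return -1;

            struct sockaddr_in local;
            memset(&local, 0, sizeof local);
            local.sin_family      = AF_INET;
            local.sin_addr.s_addr = htonl(INADDR_ANY);
            local.sin_port        = htons((unsigned short)port);
            if (bind(s, (struct sockaddr *)&local, sizeof local) < 0) {
                close(s);
                return -1;
            }

            /* The "net address" selects which card receives the group traffic */
            struct ip_mreq mreq;
            mreq.imr_multiaddr.s_addr = inet_addr(group_addr); /* e.g. "235.1.1.1" */
            mreq.imr_interface.s_addr = inet_addr(net_addr);   /* the card's own IP */
            if (setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq) < 0) {
                close(s);
                return -1;
            }
            return s; /* ready for recvfrom() */
        }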
  12. I get the feeling we are talking at cross purposes. All file reading and writing must go through the OS (unless you have a special kernel driver), so I don't really know what you are getting at.
  13. I'm saying let them write it as a service, and co-opt it for your reuse libraries/services if it looks interesting and useful. If a facility doesn't exist, someone has to write it. Software doesn't spontaneously come into being because you want it. Well, not unless you are the CEO.

    So look at my FILE.vi again. It opens a file and sends the contents to whoever requests it. The FILE.vi does not care about the file itself, its structure, or what the bytes mean, but it does require it to be a "normal" file with an open and a close. The FILE.vi can read a lot of files for most scenarios (config, INI, binary, log files etc.) but it cannot currently read TDMS files, because they need a different procedure to access them and TDMS wasn't required for this demo. Can I add it to the FILE.vi? Sure I can. I can put the code in the FILE.vi and then other modules just use the message FILE>READ>TDMS>filename. Do I want to? Maybe, if I think open/read/close of a TDMS file is useful. I could also create a STREAM service that may have a state machine (see the TELEMETRY.vi for a producer-consumer state machine), allowing other module writers to access it via its API (STREAM>WRITE>TDMS>filename etc.). Now I have another service in my application toolkit that I can add to TELEMETRY, FILE, DB, SOUND etc. to make any other applications. Maybe I do both.

    You will notice that, either way, the other modules/services only care about the message, not the code that actually does the work or where that code is (it could be on another computer!), and I can partition the software within my application as I see fit without interference from other modules/services. I can also add more APIs, and more commands to a single API, without breaking backward compatibility (within reason), as sketched below.

    Saying all that, maybe your use case requires a different architecture. There is no one-size-fits-all, no matter how much framework developers would like theirs to be.
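    A sketch of how a service might route the COMMAND field of such messages to handlers, so that adding a command is just adding a row (the handler names and table are hypothetical, not the demo's implementation):

        #include <stdio.h>
        #include <string.h>

        typedef void (*handler_fn)(const char *args);

        static void handle_read(const char *args)   { printf("READ %s\n", args); }
        static void handle_write(const char *args)  { printf("WRITE %s\n", args); }
        static void handle_stream(const char *args) { printf("STREAM %s\n", args); }

        static const struct { const char *cmd; handler_fn fn; } handlers[] = {
            { "READ",   handle_read   },
            { "WRITE",  handle_write  },
            { "STREAM", handle_stream }, /* new command = new row */
        };

        void dispatch(const char *cmd, const char *args)
        {
            for (size_t i = 0; i < sizeof handlers / sizeof handlers[0]; i++) {
                if (strcmp(handlers[i].cmd, cmd) == 0) {
                    handlers[i].fn(args);
                    return;
                }
            }
            /* unknown commands are ignored, so old callers keep working */
        }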
  14. I'll refer you to my original comment: every so often, go through the modules that others are creating, see what is useful for reuse, and add it to your core services.
  15. No. There is only one that supplies the FILE service, and it doesn't matter where you put it or how you load it. Plonking it on the diagram of the Main.vi is just a way to load it that can be easily seen and recognised. Yes, but you don't have to. That is just an implementation detail of the messaging framework I use. Each service is controlled with a queue and data is retrieved via an event. That is the module strategy. The tactical choice of pure string messaging breaks the cohesion between modules and makes the messaging system network agnostic; the use of queue names is an implementation choice to achieve the latter. The services are themselves "plugins". You expand the system, and define the system's operation, by the "plugins" you create and load for it. This topology is of the "file IO handled by a central actor" category, so there is only one, and all other modules query it directly or listen for data being emitted by it. It is like your current system without the cohesion problem you are suffering. Putting a copy in everything is a really bad idea. I get the impression you looked at the demo source only, probably because all the events were broken due to the VIM. That's a shame really, because you lose all the context and don't see each module in action and how they interact.
  16. I switched to service-oriented a while ago, which is the premise of what you are pondering. You can see a simple example in the VIM demo, along with an EDSM. You will note a couple of services, one of which is FILE, which enables other modules to access basic read functionality, and SOUND that, well, plays the sounds. Error logging is another that lends itself to this topology, and in real systems I also have a Database service that comes in very handy. The way things usually pan out is that you have a set of services that provide core functionality, and supplemental modules can use them if they want. You define an API that other modules can use so they don't have to implement everything themselves. Looking at it that way, there is no presupposition that any module is functionally complete, only that it is a provider of certain features if you decide to use them. No one is forced to, but it is advantageous to do so. If a service gets too heavy, split it out into a couple of services. The module layout doesn't matter, only the message API interface does. Because each service is a self-contained module and all interaction is via its message interface, you can transplant services into other applications or expand the feature set, as you can see I do by adding TCPIP here.
  17. Just as an afterthought: SQLite supports R*Tree spatial access methods too. Maybe relevant to your particular use case.
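    For reference, creating and querying an R*Tree index through SQLite's C API might look like the following (the table, values and query window are made up for illustration; requires SQLite built with SQLITE_ENABLE_RTREE):

        #include <stdio.h>
        #include <sqlite3.h>

        int main(void)
        {
            sqlite3 *db;
            if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

            /* Virtual table: id plus min/max for each of two dimensions */
            sqlite3_exec(db,
                "CREATE VIRTUAL TABLE zones USING rtree(id, minX, maxX, minY, maxY);"
                "INSERT INTO zones VALUES(1, 0.0, 10.0, 0.0, 10.0);",
                NULL, NULL, NULL);

            /* Find all bounding boxes overlapping the query window */
            sqlite3_stmt *stmt;
            sqlite3_prepare_v2(db,
                "SELECT id FROM zones WHERE maxX >= 2.0 AND minX <= 5.0 "
                "AND maxY >= 2.0 AND minY <= 5.0;", -1, &stmt, NULL);
            while (sqlite3_step(stmt) == SQLITE_ROW)
                printf("zone %d overlaps\n", sqlite3_column_int(stmt, 0));

            sqlite3_finalize(stmt);
            sqlite3_close(db);
            return 0;
        }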
  18. You are in the wrong stage of the process. If you are at the bidding stage, then you will be creating a proposal. That proposal becomes the specification after some back and forth and sit-down meetings. The supplier always wins the terms-and-conditions war as well as the final specification document. You obviously haven't gotten to the trick of making them adopt your specification by marking up and amending your proposal. Anyway, this is all somewhat relevant but a distraction. We are talking, at this stage, of taking a precise, well-defined document and doing what they do in the exams. If we produce a method of translating all NI CLA specifications into exam results (which I have sort of done already, so I know it is possible), we can discuss natural-language heuristics later for general use cases. Don't throw the baby out with the bathwater.
  19. There is a benchmark in the SQLite API for LabVIEW with which you can simulate your specific row and column counts, and an example of fast datalogging with on-screen display and decimation. The examples should give you a good feel for whether SQLite is an appropriate choice. Generally, if it is high-speed streaming to disk (like video) I would say TDMS; nothing beats TDMS for raw speed. For anything else, SQLite. What is your expected throughput requirement?
  20. For a while now I've been mulling over a gap in what I see as software in general. This has nothing to do with LabVIEW per se, but it is the reason we need CLAs and systems engineers to translate what the customer wants into what the customer gets. A good example of this is the CLA exam. There, we have a well-written, detailed requirements specification, and a human has to translate that into some "stuff" that another engineer will then code. So why do we need an engineer to translate what amounts to pseudo-code into LabVIEW code?

    Maybe 10-15 years ago (before scripting was a twinkle in the milkman's eye), I had a tool that would scan Word documents and output a text file with function names, parameters and comments, and this would be the basis for the detailed design specification. I would create requirements for the customer through meetings and conversations and generate a requirements specification that they could sign off in Microsoft Word. Unbeknownst to the customer, it had some rather precise formatting and terminology. It required prose such as "boolean control" and "Enumerated Indicator". It also had bold and italic items with specific meanings - bold was a control/indicator name; italic was a function or state. It was basically pseudo-code with compiler directives hidden in the text.

    Roll forward a few years, and people were fastidious about getting CLD and CLA status. Not being one of those, I looked at the CLD exam and saw that a big proportion of the scoring was non-functional. By that I mean making sure hints and descriptions are filled in etc. - you know, the stuff we don't actually do in real life. So I wrote a script that read the exam paper (after exporting to text), pulled out all the descriptions and filled in all the hints, labels and descriptions. It would probably take 5-10 minutes to recreate in an exam but ensure 100% of the score for that part of the test (this later became Passa Mak, by the way).

    So that got me thinking, once again, about the CLA exam and the gap in technology between specification and code. I have a script that takes a text file and modifies some properties and methods. It's not a great leap from modifying properties to actually scripting the "stuff" itself. I don't have the Word code anymore, but I should be able to recreate it, and instead of spitting out functions, I could actually script the code. We could compile a requirements specification! If not to a fully working program, at least so that an engineer could code the details. Doesn't that describe the CLA exam? So I looked at an example CLA exam. Woohoo. Precise formatting already. ...to be continued.
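    As a toy illustration of that kind of directive scanning (the vocabulary table and output are invented for the example; the real tool read Word formatting as well as the phrases):

        #include <stdio.h>
        #include <string.h>

        static const char *directives[] = {
            "boolean control", "Enumerated Indicator",
        };

        /* Report each known directive phrase found on a line of exported text */
        void scan_line(const char *line, int lineno)
        {
            for (size_t i = 0; i < sizeof directives / sizeof directives[0]; i++) {
                if (strstr(line, directives[i]) != NULL)
                    printf("line %d: %s\n", lineno, directives[i]);
            }
        }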
  21. Watch out for the updates backporting the Windows 10 spyware telemetry!
  22. Instead of trying to replace the AsyncCall, what about replacing the static reference with your Xnode so it produces the correct ref type? You'd only have to react to the VI drop.
  23. ShaunR

    VIM Demo-HAL

    Oh yes, nearly forgot. Here is the TCP telemetry VI that fits in that space on the main diagram that I spoke about in the other thread: TELEMETRY.vi. Just drag the VI from Explorer and plonk it in the gap in the services - job done. (I suggest you place the VI itself in with the rest of the subsystems, but it's not a requirement for it to work.) What's that? It doesn't work? It doesn't do anything? Aha! That's because you haven't connected to it. Oh, alright then. Here's a simple client to make a connection. Run it and see the candy: TCPIP Telemetry Client.vi