Everything posted by ShaunR
-
I have something that I use quite a bit for many things, but I don't think I have anything as sophisticated as you would be requiring. It's like a trackable "completeness" application - how complete a project is. It checks that descriptions are filled out, whether VIs are orphans or re-entrant, and lots of other things for keeping track of a project's progress and making sure certain standards are met. You can compare previous project scans and do diffs of the changes in issues.

It has plugins that can access its database, so you can extend its features pretty much indefinitely - I've been abusing it recently by adding scripting functions to set VI names, make them re-entrant, and other bits and pieces which it shouldn't really be able to do for a passive analyzer. It doesn't do testing as such, but it supports plugins, so you could create a plugin or two to populate its database with results, or attach another database to do cross-DB queries. It also allows in-place SQL queries, so you could also define views of your test data combined with all the other VI information. There is already a plugin for requirements coverage a la Requirements Gateway.

It's one of those tools you always use but would be a nightmare to productionise, and it could cause havoc in the wrong hands. There is an image on LavaG somewhere.
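As a rough illustration of the attach/cross-DB idea (this is just a Python/SQLite sketch, not the tool's actual schema - every file, table and column name below is invented):

```python
# Hedged illustration of the "attach another database" idea: SQLite can ATTACH
# a second database and let you query across both in one statement.
# All table and column names here are invented for the example.
import sqlite3

con = sqlite3.connect(":memory:")                      # stands in for the scan DB
con.execute("CREATE TABLE vis (name TEXT, description TEXT)")
con.execute("INSERT INTO vis VALUES ('Read Config.vi', 'Loads the INI file')")

con.execute("ATTACH DATABASE ':memory:' AS results")   # stands in for a test-results DB
con.execute("CREATE TABLE results.runs (vi_name TEXT, outcome TEXT)")
con.execute("INSERT INTO results.runs VALUES ('Read Config.vi', 'pass')")

# Cross-database query joining VI information with test results.
rows = con.execute("""
    SELECT v.name, v.description, r.outcome
    FROM vis AS v
    LEFT JOIN results.runs AS r ON r.vi_name = v.name
""").fetchall()
print(rows)
```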
-
Should File IO be handled by a central actor?
ShaunR replied to AlexA's topic in Database and File IO
I was trying to decide how I would describe the difference between an API and a Service succinctly and couldn't really come up with anything. API stands for Application Programming Interface, but I tend to use it to describe groupings of individual methods and properties - a collection of useful functions that achieve no specific behaviour in and of themselves - "a PI". Therefore, my distinguishing proposal would be state and behaviour, but Applications tend to be stateful and have all sorts of complicated behaviours, so I'm scuppered by the nomenclature there.
-
Well. Seeing as your multicast address starts with 235, I would say probably not. However, I avoid Linux whenever possible so I cannot help much further than saying what the net address is for because it will depend on how you set up the network cards and firewalls in all the layers (including Windows).
-
The net address is for the address of your network card and is usually only used if you have multiple cards installed in the system, so you can bind to a particular card. You have quite a stack of network virtualisation there. You'll probably have to set up routing to forward UDP multicast packets from your router.
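For what it's worth, here is a minimal Python sketch of what has to happen on the receiving side - joining the multicast group on a specific card rather than on all interfaces. The group, port and interface address are placeholders for your own setup:

```python
# Minimal UDP multicast listener bound to a specific interface (Python sketch).
# GROUP, PORT and IFACE below are placeholders; adjust them for your network.
import socket
import struct

GROUP = "235.1.1.1"      # example multicast group
PORT = 5000              # placeholder port
IFACE = "192.168.1.10"   # address of the NIC you want to receive on

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the multicast group on the chosen interface rather than INADDR_ANY.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton(IFACE))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(1024)
print(addr, data)
```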
-
Should File IO be handled by a central actor?
ShaunR replied to AlexA's topic in Database and File IO
I get the feeling we are talking at cross purposes. All file reading and writing must go through the OS (unless you have a special kernel driver) so I don't really know what you are getting at.
-
Should File IO be handled by a central actor?
ShaunR replied to AlexA's topic in Database and File IO
I'm saying let them write it as a service and co-opt it for your reuse libraries/services if it looks interesting and useful. If a facility doesn't exist, someone has to write it. Software doesn't spontaneously come into being because you want it. Well, not unless you are the CEO.

So look at my FILE vi again. It opens a file and sends the contents to whoever requests it. The FILE.vi does not care about the file itself, its structure or what the bytes mean, but it does require it to be a "normal" file with the open and close. The FILE.vi can read a lot of files for most scenarios (config, INI, binary, log files etc) but it cannot currently read TDMS files because they need a different procedure to access them, and TDMS isn't required for this demo. Can I add it to the FILE.vi? Sure I can. I can put the code in the FILE.vi and then other modules just use the message FILE>READ>TDMS>filename. Do I want to? Maybe, if I think open/read/close of a TDMS is useful. I could also create a STREAM service that may have a state machine (see the TELEMETRY.vi for a producer consumer state machine) and allow other module writers to access that via its API (STREAM>WRITE>TDMS>filename etc). Now I have another service in my application toolkit that I can add to TELEMETRY, FILE, DB, SOUND etc to make any other applications. Maybe I do both.

You will notice that, either way, the other modules/services only care about the message, not the code that actually does the work or where that code is (it could be on another computer!), and I can partition the software within my application as I see fit without interference from other modules/services. I can also add more APIs and more commands to a single API without breaking backward compatibility (within reason).

Saying all that, maybe your use case requires a different architecture. There is no one-size-fits-all no matter how much framework developers would like theirs to be.
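Just to illustrate the idea in text form (the real thing is LabVIEW, so this is only a hypothetical Python sketch of the string-message routing, with made-up helper names):

```python
# Hypothetical sketch (Python, not the original LabVIEW) of routing a pure
# string message such as "FILE>READ>settings.ini" inside a FILE service.
def handle_file_message(message: str) -> bytes:
    service, command, *rest = message.split(">")
    if service != "FILE":
        raise ValueError("not a FILE message: " + message)

    if command == "READ":
        # Plain read: FILE>READ><path>
        if len(rest) == 1:
            with open(rest[0], "rb") as f:
                return f.read()
        # Sub-typed read: FILE>READ>TDMS><path> would need its own
        # open/read/close procedure; this is where that branch would live.
        if rest[0] == "TDMS":
            raise NotImplementedError("TDMS needs a different access procedure")
    raise ValueError("unknown command: " + command)

# Example (commented out): handle_file_message("FILE>READ>settings.ini")
```
-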
Should File IO be handled by a central actor?
ShaunR replied to AlexA's topic in Database and File IO
I'll refer you to my original comment: every so often, go through the modules that others are creating and see what is useful for reuse and add it to your core services.
-
Should File IO be handled by a central actor?
ShaunR replied to AlexA's topic in Database and File IO
No. There is only one that supplies the FILE service, and it doesn't matter where you put it or how you load it. Plonking it on the diagram of the Main.vi is just a way to load it that can be easily seen and recognised. Yes, but you don't have to. That is just an implementation detail of the messaging framework I use. Each service is controlled with a queue and data is retrieved via an event. That is the module strategy. The tactical choice of pure string messaging breaks the cohesion between modules and makes the messaging system network agnostic. The use of queue names is an implementation choice to achieve the latter.

The services are themselves "plugins". You expand the system and define the system's operation by the "plugins" you create and load for it. This topology is of the "File IO be handled by a central actor" category, so there is only one, and all other modules query it directly or listen for data being emitted by it. It is like your current system without the cohesion problem that you are suffering. Putting a copy in everything is a really bad idea.

I get the impression you looked at the demo source only, probably because all the events were broken due to the VIM. That's a shame really, because you lose all the context and the chance to see each module in action and how they interact.
-
Should File IO be handled by a central actor?
ShaunR replied to AlexA's topic in Database and File IO
I switched to service oriented a while ago, which is the premise of what you are pondering. You can see a simple example in the VIM demo, along with an EDSM. You will note a couple of services, one of which is FILE, that enables other modules to access basic read functionality, and the SOUND that, well, plays the sounds. Error logging is another that lends itself to this topology, and in real systems I also have a Database service that comes in very handy.

The way things usually pan out is you have a set of services that provide core functionality and supplemental modules can use them if they want. You define an API that other modules can use so they don't have to implement everything themselves. Looking at it that way, there is no presupposition that any module is functionally complete, only that it is a provider of certain features if you decide to use them. No one is forced to, but it is advantageous to do so. If a service gets too heavy, split it out into a couple of services. The module layout doesn't matter, only the message API interface does. Because each service is a self-contained module and all interaction is via its message interface, you can transplant them into other applications or expand the feature set, as you can see I do by adding TCPIP here.
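If it helps to see the shape of it outside LabVIEW, here is a rough, hypothetical Python sketch of the pattern: a self-contained service fed by a command queue that publishes its results to subscribers, so other modules only ever touch the message interface. The names are illustrative, not the framework's:

```python
# Hedged sketch of the service pattern: commands go in via a queue,
# data comes out to whoever has subscribed.
import queue
import threading

class FileService:
    def __init__(self):
        self.commands = queue.Queue()      # "controlled with a queue"
        self.subscribers = []              # "data is retrieved via an event"
        threading.Thread(target=self._run, daemon=True).start()

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def send(self, message: str):
        self.commands.put(message)

    def _run(self):
        while True:
            message = self.commands.get()
            if message == "EXIT":
                break
            if message.startswith("FILE>READ>"):
                path = message.split(">", 2)[2]
                with open(path, "rb") as f:
                    payload = f.read()
                for callback in self.subscribers:
                    callback(("FILE", path, payload))

# Usage: other modules only ever see the string API, never the internals.
# svc = FileService(); svc.subscribe(print); svc.send("FILE>READ>config.ini")
```
-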
Just as an afterthought: SQLite supports RTree spatial access methods too. Maybe relevant to your particular use case.
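A quick sketch of what that looks like (Python's bundled SQLite normally includes the RTREE module, but that's an assumption worth checking for your particular build):

```python
# Quick illustration of SQLite's R-Tree spatial index.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE zones USING rtree(id, xmin, xmax, ymin, ymax)")
con.execute("INSERT INTO zones VALUES (1, 0.0, 10.0, 0.0, 10.0)")
con.execute("INSERT INTO zones VALUES (2, 50.0, 60.0, 50.0, 60.0)")

# Find every zone whose bounding box overlaps the query window (5..55, 5..55).
rows = con.execute(
    "SELECT id FROM zones WHERE xmax >= 5 AND xmin <= 55 AND ymax >= 5 AND ymin <= 55"
).fetchall()
print(rows)   # -> [(1,), (2,)]
```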
-
You are in the wrong stage of the process. If you are at the bidding stage, then you will be creating a proposal. That proposal becomes the specification after some back and forth and sit-down meetings. The supplier always wins the terms and conditions war as well as the final specification document. You obviously haven't gotten to the trick of making them adopt your specification by marking up and amending your proposal. Anyway, this is all somewhat relevant but a distraction. We are talking, at this stage, of taking a precise, well-defined document and doing what they do in the exams. If we produce a method of translating all NI CLA specifications into exam results (which I have sort of done already, so I know it is possible), we can discuss natural language heuristics later for general use cases. Don't throw the baby out with the bathwater.
-
There is a benchmark in the SQLite API for LabVIEW with which you can simulate your specific row and column counts, and an example of fast datalogging with on-screen display and decimation. The examples should give you a good feel for whether SQLite is an appropriate choice. Generally, if it is high speed streaming to disk (like video) I would say TDMS; nothing beats TDMS for raw speed. For anything else: SQLite. What is your expected throughput requirement?
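As a rough idea of the datalogging-plus-decimation pattern (a generic Python/SQLite sketch, not the toolkit's own example):

```python
# Hedged sketch of logging samples to SQLite and reading back a decimated
# subset for on-screen display. Schema and channel names are made up.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (t REAL, ch0 REAL, ch1 REAL)")

# Batch inserts inside a transaction; per-row commits would dominate the cost.
with con:
    con.executemany(
        "INSERT INTO log VALUES (?, ?, ?)",
        ((i * 0.001, i * 0.5, i * 0.25) for i in range(100_000)),
    )

# Decimate: keep roughly every 100th row for plotting.
points = con.execute("SELECT t, ch0 FROM log WHERE rowid % 100 = 0").fetchall()
print(len(points))
```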
-
For a while now I've been mulling over a gap in what I see as software in general. This has nothing to do with LabVIEW, per se, but it is the reason we need CLAs and System Engineers to translate what the customer wants into what the customer gets. A good example of this is the CLA exam. There, we have a well written, detailed requirements specification and a human has to translate that into some "stuff" that another engineer will then code. So why do we need an engineer to translate what amounts to pseudo code into LabVIEW code?

Maybe 10-15 years ago (before scripting was a twinkle in the milkman's eye), I had a tool that would scan Word documents and output a text file with function names, parameters and comments, and this would be the basis for the detailed design specification. I would create requirements for the customer with meetings and conversations and generate a requirements specification that they could sign off in Microsoft Word. Unbeknownst to the customer, it had some rather precise formatting and terminology. It required prose such as "boolean control" and "Enumerated Indicator". It also had bold and italic items that had specific meaning - bold was a control/indicator name, italic was a function or state. It was basically pseudo code with compiler directives hidden in the text.

Roll forward a few years and people were fastidious about getting CLD and CLA status. Not being one of those, I looked at the CLD exam and saw that a big proportion of the scoring was non-functional. By that I mean making sure hints and descriptions are filled in etc - you know, the stuff we don't actually do in real life. So I wrote a script that read the exam paper (after exporting to text), pulled out all the descriptions and filled in all the hints, labels and descriptions. It would probably take 5-10 minutes recreating it in an exam but ensure 100% of the score for that part of the test (this later became Passa Mak, by the way).

So that got me thinking, once again, about the CLA exam and the gap in technology between specification and code. I have a script that takes a text file and modifies some properties and methods. It's not a great leap to actually scripting the "stuff" instead of modifying its properties. I don't have the Word code anymore, but I should be able to recreate it and, instead of spitting out functions, I could actually script the code. We could compile a requirements specification! If not to a fully working program, at least so that an engineer could code the details. Doesn't that describe the CLA exam? So I looked at an example CLA exam. Woohoo. Precise formatting already... to be continued.
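The original tool long predates anything like this, but as a rough, hypothetical sketch of the same document-scanning idea in Python (using python-docx; the file name and the bold/italic conventions follow the description above):

```python
# Rough, hypothetical sketch of the Word-scanning idea using python-docx
# (pip install python-docx). Bold runs are treated as control/indicator
# names, italic runs as functions or states; the file name is a placeholder.
from docx import Document

controls, functions = [], []
for paragraph in Document("requirements_spec.docx").paragraphs:
    for run in paragraph.runs:
        text = run.text.strip()
        if not text:
            continue
        if run.bold:
            controls.append(text)      # e.g. a "boolean control" name
        elif run.italic:
            functions.append(text)     # e.g. a function or state name

print("Controls/indicators:", controls)
print("Functions/states:", functions)
```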
-
LabVIEW EXE Running on a $139 quad-core 8" Asus Vivotab tablet
ShaunR replied to smarlow's topic in LabVIEW General
Watch out for the updates backporting the Windows 10 spyware telemetry!
-
Is it possible to pass a static VI reference into Start Asynchronous Call?
ShaunR replied to JKSH's topic in LabVIEW General
Instead of trying to replace the Async Call, what about replacing the static reference with your xnode so it produces the correct ref type? You'd only have to react to the VI drop.
-
Is it possible to pass a static VI reference into Start Asynchronous Call?
ShaunR replied to JKSH's topic in LabVIEW General
You have the skills for a VIM, though.
-
libvlc-new-always-return-null
-
Oh yes, nearly forgot. Here is the TCP Telemetry VI that fits in that space on the main diagram that I spoke about in the other thread. TELEMETRY.vi Just drag the VI from explorer and plonk it in the gap in the services - job done. (I suggest you place the VI itself in with the rest of the subsystems, but it's not a requirement for it to work.) What's that? It doesn't work? It doesn't do anything? Aha! That's because you haven't connected to it. Oh, alright then. Here's a simple client to make a connection. Run it and see the candy. TCPIP Telemetry Client.vi
-
Nice. Now if only VIMs could do that. There was a dependency on an OpenG function. I've replaced it with a native LabVIEW one. SREventXnode.zip Are you thinking about putting the xnode in the CR?
-
Hard Drive Serial Number
ShaunR replied to alexp123's topic in Application Builder, Installers and code distribution
This seems to work with a standard user on Windows 7/8 but, like I said, your mileage may vary. What Windows version are you using?
-
Hard Drive Serial Number
ShaunR replied to alexp123's topic in Application Builder, Installers and code distribution
2009. Sheesh. It's true what they say about software - you support it for life. I had a query asking if I had the 64 bit version of DiskInfo. I didn't, because it was written in Delphi 7 which only had a 32 bit compiler, but I did pull out the WMI part from another of my little side projects and recreated DiskInfo (for 32 and 64 bit Windoze). Your mileage may vary. Have fun.
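For anyone curious, the underlying data is the Win32_DiskDrive WMI class. A hedged, illustrative way to poke at the same information from Python via PowerShell (not what the DLL does internally):

```python
# Illustrative only: query Win32_DiskDrive through PowerShell (Windows only).
import json
import subprocess

cmd = (
    "Get-CimInstance Win32_DiskDrive | "
    "Select-Object Model, SerialNumber | ConvertTo-Json"
)
out = subprocess.run(
    ["powershell", "-NoProfile", "-Command", cmd],
    capture_output=True, text=True, check=True,
).stdout

drives = json.loads(out)
if isinstance(drives, dict):     # a single drive comes back as one object
    drives = [drives]
for d in drives:
    print(d["Model"], d["SerialNumber"])
```
-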
And just to follow on from Rolph's wise words: if you are using LabVIEW x32, you don't care about 64 bit DLLs because you can't use them, and yes, you are fine working with the 32 bit DLL in 32 bit LabVIEW on a 64 bit Windows machine. It's just if you are developing DLLs that it is a consideration, because if you supply only a 32 bit DLL, people with 64 bit LabVIEW can't load the DLL in a CLFN. As to where to place them? Well, I place them where I damn well choose, and always in the same directory as the application and with my own name (usually namex32.dll and namex64.dll if they have to coexist) because, well, it just saves lots of hassle with DLL hell.
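The namex32/namex64 convention maps directly onto picking the library at load time by the caller's own bitness - roughly the textual equivalent of the conditional disable. A hypothetical Python/ctypes sketch (library names and path are placeholders):

```python
# Pick the DLL that matches the bitness of the calling process.
# "mylib" and the application directory are placeholders for this sketch.
import ctypes
import os
import struct
import sys

bits = struct.calcsize("P") * 8            # 32 or 64: bitness of this process
dll_name = f"mylib{'x64' if bits == 64 else 'x32'}.dll"
dll_path = os.path.join(os.path.dirname(os.path.abspath(sys.argv[0])), dll_name)

lib = ctypes.CDLL(dll_path)                # fails here if the bitness (or file) is wrong
```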
-
That's the 32 bit directory on a Win64 platform. What is the bitness of the LabVIEW you created the executable with?
-
LabVIEW 32 bit can only load 32 bit DLLs regardless of your platform bitness - although you can't install LabVIEW x64 on Win32. You don't need two computers, you need two LabVIEW versions to compile both 32 and 64 bit executables that will then use the 32 and 64 bit DLL respectively. That's why we use the conditional disable, because only the end developer's (not the end user's) bitness counts when they compile the executable. Bottom line: executables are either 64 OR 32 bit and each can only use DLLs of that bitness.

For the OP: I expect the paths are not what you're expecting them to be, or you are missing some dependencies of the DLL.