Posts posted by odoylerules

  1. It's stuff like this that is why I don't develop a Windows target and an RT target in the same project any more.  Locked classes/libraries are such a pain.  I'm not sure that will solve your issue since it would still probably recompile on you.  Do you know if the items that are recompiled happen to be marked Read Only in the OS?  That's caused me issues before on recompiles and strange saves.

     

     

  2.  

     

    I believe that the "wrongness" that @odoylerules mentioned isn't so much about having a global install, but rather having packages added to C:\Program Files\, which goes against current Windows security principles.
     

     

    Well, partly yes, I do have an issue with this.  However, I also have an issue with global dependencies, and honestly there may not be a way around that based on how LabVIEW was designed.  I don't have as much experience with LabVIEW as a lot of people here, so I'm sure there are lots of use cases where it might be needed.

     

    However, I think there are two types of LabVIEW "packages".  One would be your IDE extensions, such as the G-code manager that you linked.  These types of packages work well with VIPM and are good for extending the functionality of the LabVIEW IDE.  However, I think project-specific dependencies, such as OpenG, should live in a subfolder of the project.

     

    This would basically mean having a separate copy of OpenG for every project you are working on.  Personally, I think this makes sharing code a lot easier than running a VIPC file every time you change projects.  It compartmentalizes project-specific dependencies away from the IDE/Program Files location and into a specific place.  It also prevents one project from touching the dependencies of another.

     

    So there may be more downsides than I realize, I'm not sure, but I think it's a better approach than global dependencies.

    • Like 1
  3. I've considered writing one; however, I have a different philosophy on dependencies and prefer that all dependencies not native to LabVIEW live underneath the main project file instead of sharing files across projects within user.lib and vi.lib.  Palettes would be stored in the user libraries section of the palette viewer, and all palettes would be rewritten to point back to the project-specific dependencies directories instead of user.lib and vi.lib.  Something feels really wrong about having to rewrite Program Files directories to maintain dependencies.

     

    I've also considered using NPM, the Node.js package manager, to maintain LabVIEW packages instead of VIPM, in combination with the approach above.  This is a little more involved, as I would need to write a script to handle different types of packages, and it requires packages to have a package.json instead of a "spec" file inside of them.  So there would be additional work with that.

     

    Anyway, I haven't started anything yet, but I'm also curious what else is out there.

    • Like 1
  4. My tuppence.

     

    None work great. This comes up at least once a year and always ends the same.

     

    SVN externals work OK. (My preference)

    Git submodules work, sort of. (Git is too complicated for me).

    Check out in a single directory tree so LV uses relative paths to find subVIs.

     

    The enemy is linking and re-linking, which has never been resolved but is better than it used to be.  Resign yourself to just using text-based SCC as a binary backup system from which you can quickly do a restore or branch, and life is good.

     

    LabVIEW has always needed its own SCC system but NI have little interest in providing one. We cope at best but usually suffer.

     

    I've been using other languages too much recently.  I had totally forgotten about this thread and the general LabVIEW namespacing issues.  My dream of project dependencies living solely within the project directory may be a dead one until NI changes some things.  Even Git submodules may not be a good solution because of the library linking issues highlighted in that thread.

  5. I've been thinking about this issue a lot lately, and having recently been writing code in Node.js, I've been considering writing a LabVIEW tool that mirrors some of the functionality of the NPM way of handling Node.js dependencies.

     

    Unfortunately there are pros and cons to both of the solutions you outline.  I personally would avoid #1 at all costs.  Checking reuse code into your project repo is just asking for pain down the road.  If you pursue this route, then I would recommend looking into Git submodules.  However, even those are a pain to use, but they are probably a better route than checking code directly into the project.

     

    As far as #2 goes, this is one of my biggest issues with LabVIEW.  It would be so nice if you could have the functionality of the user.lib folder but have it associated with the project folder, or on a project-by-project basis.  If you go the user.lib route, or use the VIPC route, you are forcing the programmer to work on essentially only one project at a time.  It has always felt so heavy to me to have to modify the LabVIEW program directory to update reuse libraries.  However, if your team is heavy into using the palettes, this is the only way that I know of to get palettes for your libraries working in LabVIEW.  This is also pretty much the standard way of doing things currently, and it's probably the better route to go for the time being.  A nice future LabVIEW feature would be to load palettes out of the project instead of user.lib :)

     

     

    Anyway, the approach I'm considering is similar to NPM; why mess with something that works great?  Each project would have a file associated with it, similar to package.json, that would contain links to the Git repositories that hold your dependencies for the project.  These dependencies would then be installed into the project folder under a single "dependency" subfolder that you would ignore in your .gitignore.  Each reuse library would have its own subfolder within this dependency folder.

     

    The benefit of this system would be that you could then check in only your "dependency" file to your repo.  The user would then "install" these dependencies after they check out the main project repo.  This process would involve running the tool, which would clone the individual library repositories into this "dependency" subfolder.  The file location relative to the project file should always be constant, so you shouldn't have to worry about linking issues.  In addition, it gives you the ability to keep all of your reuse libraries in separate repositories.  If someone updates a reuse library at a later time, all you have to do is pull down the latest version of that library's repository.  It also allows you to have different versions of reuse libraries associated with different projects at the same time.
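
    To make the idea concrete, here is a rough sketch in Python of what such an install tool might do.  Everything here is hypothetical: the dependencies.json name, its fields, and the "dependencies" folder are just my illustration of the approach described above, not an existing tool.

```python
# install_deps.py -- hypothetical sketch of an NPM-style installer for LabVIEW reuse libraries.
# Assumes a manifest named "dependencies.json" next to the .lvproj that maps library names
# to Git URLs and an optional tag/branch. None of these names are standardized anywhere.
import json
import subprocess
from pathlib import Path

PROJECT_ROOT = Path(__file__).resolve().parent
MANIFEST = PROJECT_ROOT / "dependencies.json"   # e.g. {"my-reuse-lib": {"url": "...", "ref": "v1.2.0"}}
DEPS_DIR = PROJECT_ROOT / "dependencies"        # the folder you would list in .gitignore

def install():
    deps = json.loads(MANIFEST.read_text())
    DEPS_DIR.mkdir(exist_ok=True)
    for name, spec in deps.items():
        target = DEPS_DIR / name                # each reuse library gets its own subfolder
        if not target.exists():
            subprocess.run(["git", "clone", spec["url"], str(target)], check=True)
        if "ref" in spec:                       # pin to a specific tag/branch/commit
            subprocess.run(["git", "-C", str(target), "checkout", spec["ref"]], check=True)

if __name__ == "__main__":
    install()
```

    Because the dependencies folder always sits in the same place relative to the project file, the VIs should relink the same way on every machine, which is the whole point of the scheme.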

     

    The main downside I see is that you would lose palette functionality for these libraries.  I am a big Quick Drop user, so this doesn't affect me, but it could be an issue for others.  Also, this solution probably only works for Git-based source control.  I've used SVN before, but I'm not as familiar with it.  There may be other downsides I'm missing that I would like to hear about.

     

    I may start a new thread, and if I get something working I'll probably look for feedback from the community.

  6. Welcome to LVOOP... this is one of my biggest gripes about using it for certain things.  The workarounds listed above help, but using LVOOP design patterns can make some of the simplest things so heavy.  Like these guys mentioned before, you can make a new class that contains the shift register data of the NI method you want to use and write a custom method for the action you want.  Then include this class in your parent and pass it around to each child.

     

     

    You can work around that.   Have your classes use static preallocate-clone methods, and have a dynamic method that returns a prepared reference to one of these static clones.  You store that reference in your object on creation.  Then use a single call-by-reference in your loop, and all preallocate-clone NI subVIs should work correctly.

     

    — James

     

    This sounds like an interesting approach, but lately, in LV2013 SP1, I keep getting burned by hacking around the call-by-reference nodes to try to fix issues like this.  There seem to be some weird, deep LabVIEW crashing bugs that I keep stumbling upon when I do things like this.

  7. A TCP/IP server is going to be the most flexible; it just takes more to set up.

     

    If you do go with network streams, have you seen this white paper?

     

    http://www.ni.com/white-paper/12267/en/

     

    You have to set up your streams correctly, otherwise you won't be able to connect.  Also, I believe you have to shut down the "server" stream and re-open it if you want to have a new endpoint connect to it.

  8. He makes some good points, but I would guarantee that there are a hundred articles out there that would argue against every point he makes much better than I could.  I would say people just need to work with what's best for them.

     

    I use Git and I'm sticking with it.  The ease of branching is what made me switch from SVN.  Making a branch, testing a change and merging it back in is fast and easy.

     

    I will say I do miss the locking aspect of SVN, especially since LabVIEW likes to randomly touch files in the project.

  9. If this is really a command you feel is safe for anyone to send at any time to the cRIO, have you considered handling this message in a separate loop on the cRIO that maintains some cRIO state information?  Then anyone can send the command at any time, and if your cRIO is in a state that can process it, process it; otherwise discard the message or queue it up for a later time.

     

    As far as implementing this, I'm not 100% sure what to tell you.  In all my applications I specifically avoid having multiple hosts with the ability to send commands.

     

    I'm not sure if a network-buffered shared variable that you can write from multiple hosts works the way you want or not.  Basically a networked multi-writer queue.  If it does, then it could be a decent route for your application.

     

    If I was going to do something like that, I would probably set up a TCP/IP server on my cRIO and handle all "commands" using that.  It will accept multiple connections from different hosts.  You could then parse messages (commands) from different hosts based on their IP, connection sequence, or a multitude of other things.
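
    For illustration only, here is roughly what that command-dispatch idea looks like in Python rather than LabVIEW (on the cRIO you would use the native TCP primitives); the port, command names, and state check are all made up.

```python
# Hypothetical sketch of a command server: it accepts multiple clients and only acts on
# a command when the device state allows it, otherwise it refuses the request.
import socketserver

current_state = "IDLE"   # stand-in for whatever state the cRIO command loop maintains

class CommandHandler(socketserver.StreamRequestHandler):
    def handle(self):
        client_ip = self.client_address[0]          # could be used to decide who may command
        for line in self.rfile:                     # one text command per line
            command = line.decode().strip().upper()
            if command == "MOVE" and current_state != "READY":
                self.wfile.write(b"REJECTED: not ready\n")
            else:
                # in a real design, hand the command off to the main loop via a queue
                self.wfile.write(f"OK {command} from {client_ip}\n".encode())

if __name__ == "__main__":
    # ThreadingTCPServer gives each connected host its own handler thread
    with socketserver.ThreadingTCPServer(("0.0.0.0", 6000), CommandHandler) as server:
        server.serve_forever()
```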

     

    If you haven't seen it yet, this library may help you out with making a TCP/IP server.  

     

    http://www.ni.com/example/27739/en/  

     

    I have used it before on a cRIO to handle commands.

  10. I second Shaun's comment.  I have found that using multiple network streams to connect to different clients becomes extremely CPU intensive on cRIOs, especially if you need fast updates.

     

    I'm not 100% sure what you mean by "transform signal to sound".  I would probably recommend performing this action on the host computer either way.  I would push the raw data to the host and then do your transformation on the host.

     

    What I would recommend is making a single network stream that handles all the "commands" that the host might need to send to the cRIO.  The benefit of this is that you would only be able to have one host connected at a time with this stream.  This also gives you a single point from which to issue commands, so you don't have multiple people trying to control the cRIO at once.

     

    As far as sharing data back out to multiple clients, shared variables do work well for this sort of thing.  They have their own downsides, but this is probably a good place to start.  I would highly recommend that you only read data using shared variables and not write using them.  All your variables on your host would have their access mode set to "Read" and none to "Write".  This once again prevents multiple people from issuing commands at once, since you will have multiple clients connecting to the cRIO.

     

    One other thing I might recommend is that if you do use shared variables, you set the one that gives you your "Sound" raw data to have network buffering.  Since it appears to be the critical component of your system, this should help keep your data stream intact through disconnects or network issues.

     

    Hope that helps

  11. I've been looking at Node.js a lot lately.

     

    It's basically server-side JavaScript.  It lets you do the backend and frontend in the same language.  Obviously you still need to learn HTML and CSS, but I think the browser is the future for HMIs and GUIs for most of my projects.  It's hard to beat the amount of open-source material out there for the front-end browser experience.

     

    I still want to learn C++ for embedded stuff.  

  12. I was having a similar issue; however, mine was due to the GUI VIs I put into subpanels being reentrant.  In general I've found strip charts behave kind of strangely, and you can't really rely on them to update properly.

     

    In the end I moved away from the strip chart and instead to an XY graph.  Basically I maintain an array of points and redraw the graph with new points whenever needed.

     

    I have attached a quick example of how I did it.

     

    Hope it helps; it's in LabVIEW 2013 SP1.

     

     

    Also, depending on how you are doing things, you could simplify your array handling into an action engine and just call it every time you want to update.
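
    Purely as an illustration of the array handling (the attached VIs are the real example), an action-engine-style point buffer boils down to something like this; the class and method names are my own invention, sketched in Python.

```python
# Hypothetical sketch: a bounded history of points with "add" and "get" actions,
# analogous to a LabVIEW action engine feeding an XY graph.
class PointBuffer:
    def __init__(self, max_points=1000):
        self.max_points = max_points
        self.xs, self.ys = [], []

    def add(self, x, y):
        self.xs.append(x)
        self.ys.append(y)
        if len(self.xs) > self.max_points:   # drop the oldest point once the history is full
            self.xs.pop(0)
            self.ys.pop(0)

    def get(self):
        # return copies so the caller can redraw the XY graph with the current history
        return list(self.xs), list(self.ys)
```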

     

    Since I was using shared clones, I had to handle my arrays differently, keeping them in my class rather than in an action engine.

    GraphTest.vi

    GraphUpdate.vi

  13. But these aren't real dependencies.  "A calls B, which calls C" is a dependency of A on C.  "A calls B, which is in the same library as X, which calls Y, which is in a library that also has Z, which calls C" is not.  I have no interest in managing false dependencies introduced by lvlibs.

     

     

    This is the real problem.  To simplify my example: I had two classes in a library.  One was a GUI class and the other was a hardware class.  I did this to group them together for ease of reuse in future projects.  However, I had no intention of using the GUI class on the cRIO, so when I reused my hardware class, I had no idea that the GUI class would become a dependency solely because they were both in the same library.  My GUI class used subpanels and all sorts of things that cRIOs don't like, and it was causing a hard crash out of LabVIEW every time I tried to build my real-time application (a totally separate issue, but also a learning experience).  Granted, I was not as familiar with libraries as I am now, but still it is confusing, and while training may help, it seems counterintuitive.

     

    Paul's suggestion of loading the libraries into an empty project is what eventually helped me track down my issues, and I was familiar with that method from older posts on this forum.  As a side note, it's a decent way to track down corrupt classes and libraries as well.  But all of that just gets me back to: why doesn't the individual class or library "project window" show the dependencies of the library/class?

     

    I was basically looking for a way to group similar code so that I could reuse it for later projects, and what I learned is that instead of libraries I should use Git repos or VIPM and leave it at that.

  14. Does this happen at the very end of the build or when it starts?  I've recently had a strange build error (unfortunately I didn't write it down), but it would occur at the very end of building an application.

    I tracked it down to a "read only" file SEH-RTEH-errors.txt located in C:\Program Files (x86)\National Instruments\Shared\LabVIEW Run-Time\2013\errors\

     

    For some reason after every build this file was being set to "read only", and the build would not complete unless I manually set the file back to non-read-only.  In the end I had to write a script that I run before every build now.
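
    I no longer have the exact script handy, but a minimal pre-build step along these lines would do the same job; the file path comes from above, and clearing the flag with os.chmod is just one way to do it (it may need to run elevated, since the file lives under Program Files).

```python
# Hypothetical pre-build step: clear the read-only attribute on the offending file
# so the application build can complete.
import os
import stat

ERR_FILE = (r"C:\Program Files (x86)\National Instruments\Shared"
            r"\LabVIEW Run-Time\2013\errors\SEH-RTEH-errors.txt")

if os.path.exists(ERR_FILE):
    os.chmod(ERR_FILE, stat.S_IREAD | stat.S_IWRITE)   # removes the read-only flag on Windows
```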

     

    I have no idea if this helps with your issue, but this was a strange one and I thought I would share.

  15. After finishing my first truly large-scale LabVIEW project, I have also given up on LVLIB libraries.  I've also given up on XControls, but that's a different story.  I still haven't figured out how that one XControl touched everything in my project after I completely removed it and never really used it.

     

    In my case I was trying to deploy code to multiple cRIOs, and the cross-dependency issues that you all have illustrated for libraries were wreaking havoc on my deployment.  I had so many instances of strange cross dependencies loading that by the end of the day most of my source code was being loaded to the cRIO.  This was causing major problems, since GUI libraries were being loaded as well and were causing failed builds.  By the end, I couldn't figure out what was loading what, and I moved everything out of libraries.  This action alone quickly cleared up the issues I was having and was a huge lesson learned.  I do use LabVIEW classes, but I'm very careful to keep them limited and try not to cross-reference files unless I know exactly what I'm pulling in.

     

    As far as namespacing goes, that is a huge issue and I'm not sure how to address it.  I'm pretty much a solo developer, so I don't brush up against code from others often, and I have my own naming scheme.

     

     

    One thing I would like to mention is that I try to handle my LabVIEW reusable code similarly to what I've seen in Node.js NPM and Python pip.  I try to load as many dependencies below the project folder as possible so that I have a single dependency path for my project file.  Any reusable code I generate lives in a separate repo and is checked out into a separate folder just below the main project file.  I then ignore this folder in source control (Git).  Basically I treat my Git repos as libraries instead of using LabVIEW libraries to contain it all.

     

    VIPM is an awesome tool and I use it a lot, but it seems to me that LabVIEW should look at making its dependencies as modular as some of these other languages.  If you could "install" this reusable code to the project folder instead of your vi.lib or user.lib folder, it would make sharing code a lot easier.  Maybe you can do this already with VIPM and I just haven't found it.  Obviously this is problematic for how palettes are used within LabVIEW, but I think it would be a good start.

     

    I probably got off topic as well but just some thoughts.

     

    PS: Can someone please fill me in on why the individual class and library windows don't show dependencies... it's so frustrating.

    • Like 2
  16. See, the thing is that if I turn on highlight execution, both the Event Handler loop on top and the Message Handler loop on the bottom finish executing, the queues get destroyed, the events unregistered and destroyed, the whole code executes, gets to the very last node (in the extreme case the VI Server Abort VI method I added), seems to execute, and then the running arrow never turns off.

     

    I am just pushing this to the list of "we will never know".  It is definitely not the code, because just changing the launching from Start Asynchronous Call to the old Run VI method just works.  The VI stops right after it executes its last node.  And I was able to remove the abort.

     

    I will ask the customer if I can share the VI.  I was just curious if anyone else had seen that weird behavior.

     

     

    I have seen this before as well, and I couldn't track it down.  There seem to be some strange issues that arise with the "Start Asynchronous Call" function sometimes.  There is a thread on the Actor Framework board related to opening a VI reference twice and it causing issues with the async call.  I'm not sure if this could be related to that at all.  All I know is that there are some low-level bugs that seem to pop up every once in a while with it, and they seem to come and go with no explanation :/

     

    https://decibel.ni.com/content/message/91608#91608

  17. So I use Git as my version control, and the thing you will find is that the issues you mention are inherent to all source control options within LabVIEW.  It just isn't built very well for this sort of thing.  I would highly recommend continuing to use source control with it, but it does take some getting used to and can be a pain compared to text-based languages.

     

    A couple of things I would recommend.  Make sure you check the option to "Separate Compiled Code" within LabVIEW.  This will save you a lot of headaches on commits caused by simple recompiling of the source.

     

    Also, LabVIEW is able to compare and merge some of its binary files, such as VIs and controls, using LVCompare and LVMerge.  However, you have to set up your Git tool, or Git itself, to invoke these executables when performing a diff.
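
    As a rough example of the glue involved (treat the paths and tool name as assumptions, not the official setup), you can point Git's difftool mechanism at a small wrapper that launches LVCompare:

```python
# lvdiff.py -- hypothetical wrapper that Git can invoke as a difftool for .vi files.
# Assumes LVCompare.exe is at the default location of a LabVIEW install; adjust the path.
import os
import subprocess
import sys

LVCOMPARE = r"C:\Program Files (x86)\National Instruments\Shared\LabVIEW Compare\LVCompare.exe"

def main():
    # Git passes the two temporary copies of the VI as $LOCAL and $REMOTE
    local, remote = sys.argv[1], sys.argv[2]
    subprocess.run([LVCOMPARE, os.path.abspath(local), os.path.abspath(remote)])

if __name__ == "__main__":
    main()
```

    You would then register it with something like git config --global difftool.lvcompare.cmd 'python lvdiff.py "$LOCAL" "$REMOTE"' and run git difftool -t lvcompare to diff VIs.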

     

    You are correct that LabVIEW does not have a good method of comparing "container structures" such as classes, libraries and projects.  As far as the .lvclass and project files go, you are correct that there really isn't a great way to diff them.  One trick that can help in certain situations is that if you open these up in a text editor, most of the information is not in binary format, and these "container" files are mostly pointers to the locations of the files they contain.  So you can compare them at the text level, but it doesn't get you super far.

     

    As far as comparing snapshots of the entire project, you are correct that you would need to load a new copy on disk and open it up.  You are also correct that if your dependencies aren't structured correctly, you will need to rename some items because both projects will point to the same dependency.  One thing I do to try to get around this is make sure that all my project-specific items are lower in the file structure hierarchy than my main project file.  Therefore, when LabVIEW looks for the dependencies, it always looks down the file structure tree from the project file location and not elsewhere.

    This allows you to then check out the entire project to a separate folder on your hard drive, and when you open it, there shouldn't be any project-specific dependencies that overlap and you shouldn't have to rename any items.

     

    It's a pain, but it's one way of doing it.

    • Like 1
  18. I'd like to leave a small note for anyone reading this topic: this turned out to be a highly unmaintainable solution to the challenge. I've bit the bullet and moved to a LVOOP approach, which is converging towards the ESF solution. The required LV skill for using classes is definitely outweighed by the benefits, mainly inheritance and property nodes on DVR wires of the class.

     

    Remember you said this.... 

     

    LVOOP comes with its own set of troubles that I'm sure you will stumble upon in time.  I'm not saying it's not the better route; it's just not the end-all, be-all :)

  19.  

     

    When both UI and process code need to be aware of some state, e.g. "Move Slowly" option is flagged by the user. Either the process requests that state before obeying a command to move, or that state is copied into the process whenever the user modifies the flag. Which is better and why?

     

    I have struggled with this as I get more and more into LabVIEW.  In general I force my processes to handle their own state and my GUIs to handle their own state, and each ignores messages based on its current state.  In other words, the "Move Slowly" message could be sent at any time to the process, but since the process is in charge of its own state, it can always choose to ignore it.  I usually push process state changes back to GUIs using the GUI's own message queue.  So in general I don't use the request-response structure so much as always send, and let the recipient do what it will with the message based on its state.
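
    As a plain illustration of that pattern (Python pseudocode rather than LabVIEW, with made-up message and state names), the process loop just checks its own state before honoring a message:

```python
# Hypothetical sketch: the process owns its state and decides what to do with each message.
import queue

def do_move(distance, slow):
    pass   # stand-in for the real motion code

def process_loop(inbox: queue.Queue, gui_outbox: queue.Queue):
    state = {"mode": "IDLE", "move_slowly": False}
    while True:
        name, payload = inbox.get()             # e.g. ("MOVE_SLOWLY", True) or ("MOVE", 10.0)
        if name == "MOVE_SLOWLY":
            state["move_slowly"] = payload      # always safe to accept
        elif name == "MOVE":
            if state["mode"] != "READY":
                continue                        # the process chooses to ignore it in this state
            do_move(payload, slow=state["move_slowly"])
        elif name == "SET_MODE":
            state["mode"] = payload
            gui_outbox.put(("PROCESS_STATE", state["mode"]))   # push the change to the GUI's queue
```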

     

    The request-response option gives you a very loosely coupled system, but I'm with Mike that in most cases I have not found this method absolutely necessary.  In general, I usually keep a copy of the process state in the GUI, but I haven't found many situations in my current work where my process needs to know my GUI state.  In general I think of my GUI only as a way to send messages to my process.  I usually keep copies of any information the process would need to function in the process itself, so that in essence the GUI could change but the process wouldn't need to.  My GUIs are tightly coupled to my process, but my process is not tightly coupled to the GUI.

     

    Others with more experience may have better suggestions.  

     

     

     

    When I change the state of a control, I can either copy its state into a variable in the control loop, or I can maintain references to the controls and read it from the control loop as necessary. I have been doing the former, but I would like to do the latter as it would mean having one less copy of the data. Are there any gotchas I should be aware of?

     

     

     

    I don't know if you use cRIOs, NI embedded options, or even other remote systems, but one argument for sticking with your current method is that you are truly decoupled in that situation, and it makes it easier to move your code to other platforms if necessary.  That way you are only passing messages between loops/processes and not references, which can't cross network boundaries and things like that.  Just a thought; it may not apply to your situation.

  20. This is kind of a tricky area with Git source control.  There are a lot of factors it depends on.  Git doesn't work the same as SVN, and it's hard to pull down a single file like you can in SVN without locally cloning the entire repository, which you can do with git clone/pull and then removing all the files except the one you want.

     

    Where would this file be "hosted"?  Would it be on GitHub/Bitbucket, or some remote computer?  If it's stored on GitHub/Bitbucket, I believe you can provide a download link for individual files from their webpage; however, this feature is not native to remote Git repositories in general.  You would then have to download this file using whatever method you prefer.

     

    A quick search came up with these threads that might be of some help to you.   

     

    http://stackoverflow.com/questions/7106012/download-a-single-folder-or-directory-from-a-github-repo

     

    http://stackoverflow.com/questions/3642143/get-a-single-file-from-a-remote-git-repository

     

    http://stackoverflow.com/questions/160608/how-to-do-a-git-export-like-svn-export

     

    If you have a local copy of the full repository, it appears you can print the contents of a single file from the command line by using git show HEAD:$path_to_file

    I don't think this will work well for non-text-based files, but it should be fine for your config files.

     

    As far as method goes, if you only need to perform commands with the command line, then I might recommend using "System Exec.vi" within LabVIEW.  I have used it a lot to perform functions that require parsing STDOUT from the command line.

     

    However, Python/C# obviously give you other capabilities for grabbing and parsing the file if needed.  
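
    For example, with a local clone available, a small Python helper (hypothetical names; the git show syntax is the same one quoted above) could grab a file's contents like this:

```python
# Hypothetical helper: read one file's contents out of a local Git clone
# using "git show <ref>:<path>", as mentioned above.
import subprocess

def read_file_from_repo(repo_dir: str, path_in_repo: str, ref: str = "HEAD") -> str:
    result = subprocess.run(
        ["git", "-C", repo_dir, "show", f"{ref}:{path_in_repo}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(read_file_from_repo(r"C:\repos\my-config-repo", "config/settings.ini"))
```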
